ASF JIRA
Displaying 1000 issues at 19/Mar/20 20:35.
ZOOKEEPER-2756: Add CMake build system for better cross-platform support
Improvement | Resolved | Major | Fixed
Assignee: Andrew Schwartzmeyer | Reporter: Andrew Schwartzmeyer
Created: 13/Apr/17 17:55 | Updated: 10/Aug/17 14:37 | Resolved: 10/Aug/17 14:37
Affects: 3.5.2 | Components: build, c client | Labels: build, windows
Votes: 0 | Watchers: 3 | Linked issues: ZOOKEEPER-2841, MESOS-7297 | Environment: Windows and Linux

The C bindings' primary build system is Autotools, which does not work on Windows, so the original Windows port simply added a Visual Studio solution to the project, splitting the build system. As new versions of Visual Studio came along, new (probably auto-converted) solutions were added (see zookeeper.sln vs. zookeeper-vs2013.sln). When Mesos was ported to Windows, a Visual Studio 2015 solution was needed, and the previous developer created yet another solution and set up Mesos' build to patch ZooKeeper with it. Now that Visual Studio 2017 has been released, moving Mesos forward would have required yet another converted solution for ZooKeeper. So instead I tackled the root problem and ported the Autotools build to CMake, a meta-build system that generates build files for the platform in use (whether Linux, Solaris, macOS, or Windows). NOTE: I already have this patch and will submit it. It has a couple of TODOs, and some other things in it that were necessary for Mesos and may need to be pulled into separate patches.
ZOOKEEPER-2755: Allow to subclass ClientCnxnSocketNetty and NettyServerCnxn in order to use Netty local transport
New Feature | Resolved | Major | Won't Fix
Assignee: Enrico Olivelli | Reporter: Enrico Olivelli
Created: 13/Apr/17 00:51 | Updated: 17/Jan/18 03:02 | Resolved: 17/Jan/18 03:02
Affects: 3.5.2 | Components: java client, server
Votes: 0 | Watchers: 3

ClientCnxnSocketNetty and NettyServerCnxn explicitly use the InetSocketAddress class to work with network addresses. With a little refactoring to use only SocketAddress, it becomes possible to create subclasses of ClientCnxnSocketNetty and NettyServerCnxn that leverage Netty's built-in 'local' channels. Such local channels do not create real sockets, so a ZooKeeper server and client can run in the same JVM without binding to real TCP endpoints. Use cases:
- Run tests of projects that use ZooKeeper concurrently on the same machine (in unit tests the server and the client usually run in the same JVM) without dealing with random ports, and generally using fewer network resources.
- Run simplified (standalone, all processes in the same JVM) versions of applications that need a working ZooKeeper ensemble.
Note: embedding a ZooKeeper server and client in the same JVM carries many risks, and in general I think we should not encourage users to do so, so this patch will not provide official implementations of ClientCnxnSocketNetty and NettyServerCnxn. Implementations will exist only in the test packages, in order to verify that most features work with custom socket factories, in particular with the LocalAddress subclass of SocketAddress. Note: the 'local' sockets feature will also be available on Netty 4.
ZOOKEEPER-2754: Set up Apache Jenkins job that runs the flaky test analyzer script. (sub-task of ZOOKEEPER-3170)
Sub-task | Resolved | Major | Fixed
Assignee: Michael Han | Reporter: Michael Han
Created: 11/Apr/17 18:37 | Updated: 15/Oct/18 06:27 | Resolved: 13/Apr/17 17:29
Fix versions: 3.4.11, 3.5.4, 3.6.0 | Components: tests
Votes: 0 | Watchers: 1
ZOOKEEPER-2753: Introduce a Python script for generating flaky test reports (sub-task of ZOOKEEPER-3170)
Sub-task | Resolved | Major | Fixed
Assignee: Michael Han | Reporter: Michael Han
Created: 11/Apr/17 18:17 | Updated: 15/Oct/18 06:27 | Resolved: 15/Oct/18 06:20
Fix versions: 3.6.0 | Components: tests | Labels: tools
Votes: 0 | Watchers: 2

This Python script uses the Jenkins REST API to query Jenkins builds, analyze test results, and generate reports of flaky tests over a range of builds across specific time periods. A preview of the dashboard it generates: http://home.apache.org/~hanm/dashboard/report.html Pull request: https://github.com/apache/zookeeper/pull/224
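The core analysis such a script performs can be sketched in Python. This is an illustrative sketch only, with an assumed data shape (`flaky_report` and the `history` mapping are invented names); the real script queries the Jenkins REST API and renders an HTML dashboard.

```python
def flaky_report(history):
    """Identify flaky tests from per-build outcomes.

    `history` maps a test name to a list of booleans, one per Jenkins
    build in the analyzed range (True = passed). A test is flagged as
    flaky when it both passed and failed in that range; the report maps
    each flaky test to its failure rate, worst offenders first.
    """
    report = {}
    for test, outcomes in history.items():
        failures = outcomes.count(False)
        if 0 < failures < len(outcomes):  # mixed outcomes => flaky
            report[test] = failures / len(outcomes)
    return dict(sorted(report.items(), key=lambda kv: -kv[1]))

# testB is flaky (fails half the time); testC always fails, so it is
# broken rather than flaky and is deliberately excluded.
builds = {
    "testA": [True, True, True, True],
    "testB": [True, False, True, False],
    "testC": [False, False, False, False],
}
# flaky_report(builds) -> {"testB": 0.5}
```

Distinguishing always-failing tests from intermittently failing ones keeps genuinely broken tests out of the flaky dashboard, where they would otherwise drown the signal.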
ZOOKEEPER-2752: Introduce ZooKeeper flaky test dashboard (sub-task of ZOOKEEPER-3170)
Sub-task | Open | Major | Unresolved
Assignee: Michael Han | Reporter: Michael Han
Created: 11/Apr/17 18:13 | Updated: 29/Aug/19 10:56
Components: tests | Labels: tools
Votes: 0 | Watchers: 2

The ZooKeeper flaky test dashboard is a set of tools used to track, analyze, and report flaky tests. It is designed to automate most of the labor-intensive work of tracking, monitoring, analyzing, and aggregating unit test results from the various Apache ZooKeeper builds. It also increases the visibility of test health and, in the long term, the overall quality of ZooKeeper builds. This work is inspired by similar work on the HBase side, with some of the tools borrowed from HBase and customized for ZooKeeper's use cases.
ZOOKEEPER-2751: Investigate why existing unit tests do not leak the connection bean in the NIO code path
Task | Open | Minor | Unresolved
Assignee: Unassigned | Reporter: Michael Han
Created: 10/Apr/17 13:11 | Updated: 10/Apr/17 13:12
Components: server | Linked issues: ZOOKEEPER-2743
Votes: 0 | Watchers: 3

In ZOOKEEPER-2743 we observed a race condition in the Netty code path that could leak the connection bean; a similar code pattern exists in the NIO code path as well, yet existing unit tests never fail when the NIO code path is active. This is a follow-up to ZOOKEEPER-2743 to ensure we don't leak connection beans when the NIO code path is used. Deliverable: either unit tests that fail with a leaked connection bean when the NIO code path is enabled, or proof/analysis of why the NIO code path does not leak the connection bean.
ZOOKEEPER-2750: Document SSL Support for Atomic Broadcast protocol (sub-task of ZOOKEEPER-3451)
Sub-task | Closed | Major | Fixed
Assignee: Andor Molnar | Reporter: Abraham Fine
Created: 07/Apr/17 15:49 | Updated: 01/Jul/19 10:53 | Resolved: 11/Mar/19 15:01
Fix versions: 3.6.0, 3.5.5 | Labels: pull-request-available, ssl-tls | Linked issues: ZOOKEEPER-236
Votes: 0 | Watchers: 4
ZOOKEEPER-2749: Cleanup findbugs warnings in branch-3.4: Experimental Warnings (sub-task of ZOOKEEPER-2728)
Sub-task | Resolved | Major | Fixed
Assignee: Abraham Fine | Reporter: Abraham Fine
Created: 06/Apr/17 23:49 | Updated: 24/May/17 10:59 | Resolved: 08/Apr/17 18:57
Affects: 3.4.10 | Fix versions: 3.4.11
Votes: 0 | Watchers: 3
ZOOKEEPER-2748: Admin command to voluntarily drop client connections
New Feature | Patch Available | Minor | Unresolved
Assignee: Marco P. | Reporter: Marco P.
Created: 06/Apr/17 18:03 | Updated: 30/Jan/19 08:37
Components: server | Labels: pull-request-available
Votes: 0 | Watchers: 4

In certain circumstances it would be useful to move clients from one server to another. One example: a quorum of three servers (A, B, C) with 1000 active client sessions, where 900 clients are connected to server A and the remaining 100 are split over B and C (see the example below for how this can happen). A does much more work than B and C, and overall throughput would benefit from dividing the clients more evenly. Worse, if A fails, all its clients will create an avalanche by migrating en masse to another server. Other use cases for a mechanism to move clients:
- Migrate away all clients before a server restart.
- Migrate away part of the clients in response to runtime metrics (CPU/memory usage, ...).
- Shuffle clients after adding more server capacity (e.g. adding Observer nodes).
The simplest form of rebalancing, which requires no major changes to the protocol or client code, is asking a server to voluntarily drop some number of connections; clients then transparently move to a different server. Patch introducing 4-letter commands to shed clients: https://github.com/apache/zookeeper/pull/215

How client imbalance happens in the first place, an example. Imagine servers A, B, C and 1000 connected clients, initially spread evenly:
  A: 333, B: 333, C: 334
Now restart the servers a few times, always in A, B, C order (e.g. to pick up software upgrades or configuration changes):
  Restart A: A: 0, B: 499, C: 500
  Restart B: A: 250, B: 0, C: 750
  Restart C: A: 625, B: 375, C: 0
The imbalance is already pretty bad: C is idle while A carries most of the load. A second round of restarts makes the situation even worse:
  Restart A: A: 0, B: 688, C: 313
  Restart B: A: 344, B: 0, C: 657
  Restart C: A: 673, B: 328, C: 0
Larger clusters (5, 7, 9 servers) make the imbalance even more evident.
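The restart arithmetic above can be reproduced with a small simulation. This is an illustrative sketch, not ZooKeeper code; it assumes a restarted server's clients reconnect evenly to the surviving servers (intermediate counts can differ by one from the text depending on rounding, but the end state after the first round matches).

```python
def rolling_restart(clients, order):
    """Simulate rolling restarts: each restarted server's clients
    reconnect evenly to the surviving servers, and the restarted
    server comes back up with no clients."""
    clients = dict(clients)
    for down in order:
        movers, clients[down] = clients[down], 0
        survivors = [s for s in clients if s != down]
        share, extra = divmod(movers, len(survivors))
        for i, s in enumerate(survivors):
            clients[s] += share + (1 if i < extra else 0)
    return clients

after_round1 = rolling_restart({"A": 333, "B": 333, "C": 334}, "ABC")
# after_round1 -> {'A': 625, 'B': 375, 'C': 0}
```

Running a second round on `after_round1` shows the imbalance persisting rather than healing, which is the motivation for an explicit shedding command.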
ZOOKEEPER-2747: Fix ZooKeeperAdmin compilation warning
Bug | Resolved | Major | Fixed
Assignee: Abraham Fine | Reporter: Abraham Fine
Created: 06/Apr/17 18:01 | Updated: 17/Apr/17 20:42 | Resolved: 17/Apr/17 19:58
Affects: 3.5.2 | Fix versions: 3.5.4, 3.6.0
Votes: 0 | Watchers: 4

Currently when compiling ZooKeeper we see a compilation warning:
{code}
[javac] /zookeeper/src/java/main/org/apache/zookeeper/admin/ZooKeeperAdmin.java:43: warning: [try] auto-closeable resource ZooKeeperAdmin has a member method close() that could throw InterruptedException
[javac] public class ZooKeeperAdmin extends ZooKeeper {
[javac]        ^
[javac] 2 warnings
{code}
This is due to the implementation of AutoCloseable in the ZooKeeper superclass. That class has a warning suppression and an explanation; we should copy it to the ZooKeeperAdmin class.
ZOOKEEPER-2746: Leader hand-off during dynamic reconfig is best effort, while test always expects it
Test | Resolved | Major | Fixed
Assignee: Michael Han | Reporter: Michael Han
Created: 04/Apr/17 19:39 | Updated: 17/Apr/17 19:48 | Resolved: 04/Apr/17 20:03
Affects: 3.5.2 | Fix versions: 3.5.4, 3.6.0 | Linked issues: ZOOKEEPER-2135
Votes: 0 | Watchers: 3

When a non-trivial config change happens on the leader (e.g. port change, role change), we minimize disruption to the quorum by handing off leadership: the current leader nominates the next leader instead of kicking off a full leader election. However, this is best effort, not a guarantee that the new quorum leader will be the nominated one: the nominated leader may fail to establish leadership during the sync phase, which leads to new election rounds and a different elected leader. ReconfigTest.testPortChange checks that the leader after a dynamic reconfig of the current leader is a different server; per the above, this is not always the case, since the nominated leader might fail to get a quorum to ack its leadership during the sync phase and the old leader might end up as the new leader. We could either fix the test by removing the check, or always guarantee that the new leader after a dynamic reconfig is the nominated leader (which does not make much sense, given that the nominated leader can also crash).
ZOOKEEPER-2745: Node loses data after disk-full event, but successfully joins quorum
Bug | Open | Critical | Unresolved
Assignee: Unassigned | Reporter: Abhay Bothra
Created: 04/Apr/17 17:55 | Updated: 21/Nov/18 20:46
Affects: 3.4.6 | Components: server | Environment: Ubuntu 12.04
Votes: 0 | Watchers: 5

If the disk is full on one ZooKeeper node in a 3-node ensemble, that node can join the quorum with partial data.

Setup:
- A 3-node ZooKeeper ensemble on Ubuntu 12.04, running as upstart services. Call the nodes A, B, and C.

Observation:
- Connecting to two of the three nodes (A and B), an `ls` showed: /foo /bar /baz. But an `ls` on node C showed only: /baz.
- Node C's data directory contained: log.1001, log.1600, snapshot.1000 (size 200), snapshot.1200 (size 269), snapshot.1300 (size 300).
- Snapshot sizes on nodes A and B were in the vicinity of 500 KB.

RCA:
- The disk was full on node C prior to the creation time of the small snapshot files.
- The server logs showed ZooKeeper had crashed and restarted a few times after the first disk-full event. Every time ZooKeeper starts, it does three things: 1. run the purge task to clean up old snapshots and txn logs (our autopurge.snapRetainCount is set to 3); 2. restore from the most recent valid snapshot plus the txn logs that follow; 3. take part in a leader election, realize it has missed something, become a follower, get a diff of missed txns from the current leader, and create a new snapshot of its current state.
- We confirmed that a valid snapshot of the system existed prior to, and immediately after, the crash. Call it snapshot.800.
- Over the next three restarts, ZooKeeper purged older snapshots, restored from snapshot.800 plus txn logs, synced up with the leader, and tried to write its updated state to a new snapshot, crashing each time due to the full disk. The snapshot file, even though invalid, had been created. Note: this is the first source of the bug. It might be more appropriate to first write the snapshot to a temporary file and then rename it to snapshot.<txn_id>; that would give us more confidence in the validity of snapshots in the data dir. Say the snapshot files created this way were snapshot.850, snapshot.920, and snapshot.950.
- On the 4th restart, the purge task retained the three most recent snapshots (snapshot.850, snapshot.920, snapshot.950) and proceeded to purge snapshot.800 and its associated txn logs, assuming they were no longer needed. Note: this is the second source of the bug. Instead of retaining the three most recent *valid* snapshots, the server retains the three most recent snapshots regardless of their validity.
- When restoring, ZooKeeper finds no valid snapshot to restore from, so it tries to reload its state from txn logs starting at zxid 0. Those transaction logs were garbage-collected long ago, so it reloads from whatever txn logs are present. Say the only txn log file present (log.951) contains zxids 951 to 998: it reloads from that file, syncs with the leader (getting txns 999 and 1000), and writes snapshot.1000 to disk (deleting snapshot.800 freed enough space for it). From this state onward, ZooKeeper will always assume it has all state up to txn 1000, even though it only has state from txns 951 to 1000.
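The temp-file-then-rename approach suggested above can be sketched generically. This is an illustrative Python sketch, not the ZooKeeper implementation; the function name and temp-file prefix are invented.

```python
import os
import tempfile

def write_snapshot_atomically(path, data):
    """Write `data` (bytes) to `path` via temp file + atomic rename.

    A crash mid-write leaves at most a stray temp file; it can never
    leave a truncated file under the final name, so any file that does
    carry the final snapshot name is complete.
    """
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory, prefix=".snapshot-tmp-")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # make the bytes durable before the rename
        os.replace(tmp, path)     # atomic within a single filesystem
    except BaseException:
        os.unlink(tmp)            # clean up the partial temp file
        raise
```

The temp file must live in the same directory as the target so the rename stays within one filesystem, which is what makes it atomic on POSIX.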
ZOOKEEPER-2744: Typos in the comments of the ZooKeeper class
Improvement | Resolved | Trivial | Fixed
Assignee: Abraham Fine | Reporter: Ethan Li
Created: 02/Apr/17 22:56 | Updated: 17/Apr/17 20:42 | Resolved: 17/Apr/17 20:03
Affects: 3.4.10, 3.5.2 | Fix versions: 3.5.4, 3.6.0
Votes: 0 | Watchers: 5

In the comments of the ZooKeeper class definition, "This special event has type EventNone and state sKeeperStateDisconnected." should be "This special event has EventType None and KeeperState Disconnected."
ZOOKEEPER-2743: Netty connection leaks JMX connection bean upon connection close in certain race conditions
Bug | Resolved | Major | Fixed
Assignee: Michael Han | Reporter: Michael Han
Created: 01/Apr/17 16:30 | Updated: 17/Apr/17 19:44 | Resolved: 10/Apr/17 13:20
Affects: 3.4.10, 3.5.2 | Fix versions: 3.4.11, 3.5.4, 3.6.0 | Components: server | Labels: netty
Votes: 0 | Watchers: 4 | Linked issues: ZOOKEEPER-2707, ZOOKEEPER-2751

This is a tricky issue found while debugging the failure of a "flaky" watcher test (ZOOKEEPER-2686). When closing a Netty connection, depending on timing, the connection bean registered when the connection was provisioned might not get unregistered, leaking Java beans. The race happens while the client is finalizing the session. As part of session finalization, a connection bean is registered [1]. But right before the bean is registered, the connection might get closed, for example when the server the client is connected to shuts down. As part of connection close, the bean is unregistered, as expected [2]; the problem is that when we execute [2], the bean registration at [1] might not have finished, so the unregister is a no-op. Worse, connection close also removes the connection from the connection factory [3], so future close calls are short-circuited and return directly; in other words, the bean-unregister code in connection close only executes once. Depending on luck, the bean might never get unregistered, as illustrated above.
[1] https://github.com/apache/zookeeper/blob/master/src/java/main/org/apache/zookeeper/server/ZooKeeperServer.java#L700
[2] https://github.com/apache/zookeeper/blob/master/src/java/main/org/apache/zookeeper/server/NettyServerCnxn.java#L114
[3] https://github.com/apache/zookeeper/blob/master/src/java/main/org/apache/zookeeper/server/NettyServerCnxn.java#L96
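One way to make such a register/close race harmless is to serialize both operations per connection and record closed connections, so a close that wins the race turns the later register into a no-op. A minimal conceptual sketch in Python (illustrative only; the actual fix lives in the Java server's JMX handling, and all names here are invented):

```python
import threading

class ConnectionBeanRegistry:
    """Register/unregister connection beans so that a close() racing
    ahead of register() cannot leak: once a connection is closed, a
    late register() for it becomes a no-op instead of leaving a bean
    behind with nobody left to unregister it."""

    def __init__(self):
        self._lock = threading.Lock()
        self._beans = set()    # currently registered connection ids
        self._closed = set()   # connections that have already closed

    def register(self, conn_id):
        with self._lock:
            if conn_id in self._closed:
                return False   # close won the race: never register
            self._beans.add(conn_id)
            return True

    def close(self, conn_id):
        with self._lock:
            self._closed.add(conn_id)
            self._beans.discard(conn_id)

    def registered(self):
        return set(self._beans)
```

Because both paths take the same lock, "register after close" and "close after register" both end with the bean absent, which is the invariant the bug violates.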
ZOOKEEPER-2742: A few test cases of org.apache.zookeeper.ZooKeeperTest fail on Windows
Test | Resolved | Trivial | Fixed
Assignee: Abhishek Kumar | Reporter: Abhishek Kumar
Created: 30/Mar/17 11:08 | Updated: 04/May/17 06:11 | Resolved: 29/Apr/17 14:20
Fix versions: 3.5.4, 3.6.0 | Components: tests | Environment: Windows
Votes: 0 | Watchers: 4

The following test cases fail in a Windows environment:
1. org.apache.zookeeper.ZooKeeperTest.testLsrRootCommand()
2. org.apache.zookeeper.ZooKeeperTest.testLsrCommand()
The failure appears related to the use of "\n" (the newline character is system dependent) in org.apache.zookeeper.ZooKeeperTest.runCommandExpect(CliCommand, List<String>):
{code}
...
String result = byteStream.toString();
assertTrue(result, result.contains(
        StringUtils.joinStrings(expectedResults, "\n")));
...
{code}
ZOOKEEPER-2741: A Swift connector for ZooKeeper is available now: Perfect-ZooKeeper
New Feature | Open | Minor | Unresolved
Assignee: Unassigned | Reporter: Rockford Wei
Created: 28/Mar/17 14:52 | Updated: 28/Mar/17 14:54
Components: contrib
Votes: 0 | Watchers: 1

Perfect-ZooKeeper is a Swift class wrapper of the ZooKeeper C connector. Source code: https://github.com/PerfectlySoft/Perfect-ZooKeeper Documentation: http://www.perfect.org/docs/ZooKeeper.html Perfect is an open source server-side Swift framework supported by PerfectlySoft Inc. since 2015. We are happy to share this new component as a community contribution.
ZOOKEEPER-2740: Port ZOOKEEPER-2737 to branch-3.4
Bug | Resolved | Critical | Fixed
Assignee: Michael Han | Reporter: Michael Han
Created: 27/Mar/17 13:35 | Updated: 20/Jul/17 13:42 | Resolved: 20/Jul/17 13:42
Affects: 3.4.9, 3.4.10 | Fix versions: 3.4.11 | Components: server | Labels: netty
Votes: 0 | Watchers: 2

The ZOOKEEPER-2737 fix is pending for branch-3.4 because we are in the middle of a release. This fix should go in after 3.4.10 gets out. cc [~rakeshr].
ZOOKEEPER-2739: maxClientCnxns not working in NettyServerCnxnFactory
Bug | Open | Major | Unresolved
Assignee: Unassigned | Reporter: Vincent Poon
Created: 24/Mar/17 21:22 | Updated: 25/Mar/17 00:34
Affects: 3.4.9, 3.5.0, 3.6.0 | Linked issues: ZOOKEEPER-2280
Votes: 0 | Watchers: 4

The maxClientCnxns field isn't used in NettyServerCnxnFactory, and therefore the connection limit isn't enforced. See the attached test.
ZOOKEEPER-2738: maxClientCnxns not limiting concurrent connections properly
Bug | Patch Available | Major | Unresolved
Assignee: Unassigned | Reporter: Vincent Poon
Created: 24/Mar/17 21:12 | Updated: 30/Jan/19 08:33
Affects: 3.5.2, 3.6.0 | Labels: pull-request-available
Votes: 0 | Watchers: 4

The test MaxCnxnsTest is incorrect: it only creates up to maxCnxns threads, whereas it should create more. See the attached patch. When the test is fixed, it fails on master and 3.5, where ZOOKEEPER-1504 removed some synchronization.
ZOOKEEPER-2737: NettyServerCnxnFactory leaks connection if exception happens while writing to a channel
Bug | Closed | Critical | Fixed
Assignee: Michael Han | Reporter: Michael Han
Created: 24/Mar/17 13:16 | Updated: 17/May/17 23:43 | Resolved: 27/Mar/17 13:32
Affects: 3.5.2 | Fix versions: 3.5.3, 3.6.0 | Components: server | Labels: connection, netty, server
Votes: 0 | Watchers: 4 | Linked issues: ZOOKEEPER-733, ZOOKEEPER-2686

Found this while debugging occasionally failing unit tests. Currently, when an exception occurs while writing to a channel with Netty, we do this:
{code}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) throws Exception {
    LOG.warn("Exception caught " + e, e.getCause());
    NettyServerCnxn cnxn = (NettyServerCnxn) ctx.getAttachment();
    if (cnxn != null) {
        if (LOG.isDebugEnabled()) {
            LOG.debug("Closing " + cnxn);
            cnxn.close();
        }
    }
}
{code}
So the connection is only closed when debug logging is enabled. This is problematic because lots of cleanup code is abstracted inside close(), and without properly closing the connection we leak resources. The [commit log|https://github.com/apache/zookeeper/blob/master/src/java/main/org/apache/zookeeper/server/NettyServerCnxnFactory.java#L147] indicates the issue has existed since day one, with ZOOKEEPER-733. Note that the original patch uploaded to ZOOKEEPER-733 had the close call in the right place; it got moved during iterations on the patch without anyone noticing.
ZOOKEEPER-2736: Add a connection rate limiter
Improvement | Resolved | Major | Duplicate
Assignee: Unassigned | Reporter: Vincent Poon
Created: 23/Mar/17 14:30 | Updated: 08/Aug/19 13:14 | Resolved: 08/Aug/19 13:13
Affects: 3.4.9, 3.5.2 | Components: server | Labels: pull-request-available | Linked issues: ZOOKEEPER-3242
Votes: 0 | Watchers: 7

Currently the maxClientCnxns property only limits the aggregate number of connections from a client, not the rate at which connections can be created. This patch adds a configurable connection rate limiter that limits the rate as well.
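A connection rate limiter of the kind proposed here is commonly built as a token bucket. A minimal Python sketch of the idea (illustrative only; this is not the patch from the pull request, and the feature that eventually landed via ZOOKEEPER-3242 has its own design and configuration):

```python
import time

class ConnectionRateLimiter:
    """Token bucket: admit at most `rate` new connections per second
    on average, with bursts of up to `burst` back-to-back."""

    def __init__(self, rate, burst, clock=time.monotonic):
        self.rate = float(rate)
        self.burst = float(burst)
        self.clock = clock          # injectable for deterministic tests
        self.tokens = self.burst    # start with a full bucket
        self.last = clock()

    def try_acquire(self):
        now = self.clock()
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # over the rate: reject (or defer) the connection
```

Injecting the clock keeps the limiter deterministic under test: a fake clock can be advanced explicitly instead of sleeping.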
ZOOKEEPER-2735: Typo fixes in some scripts
Bug | Resolved | Trivial | Fixed
Assignee: Woojin Joe | Reporter: Woojin Joe
Created: 22/Mar/17 21:27 | Updated: 12/Jul/17 23:10 | Resolved: 23/Mar/17 13:22
Fix versions: 3.6.0 | Components: scripts
Votes: 0 | Watchers: 5

The same typo appears in the following shell scripts:
- https://github.com/apache/zookeeper/blob/master/bin/zkCleanup.sh#L28
- https://github.com/apache/zookeeper/blob/master/bin/zkCli.sh#L28
- https://github.com/apache/zookeeper/blob/master/bin/zkServer.sh#L25
*POSTIX* needs to be changed to *POSIX*.
ZOOKEEPER-2734: 3.5.3 should be a beta release instead of an alpha release
Task | Closed | Blocker | Fixed
Assignee: Michael Han | Reporter: Michael Han
Created: 21/Mar/17 13:52 | Updated: 17/May/17 23:44 | Resolved: 27/Mar/17 23:49
Affects: 3.5.2 | Fix versions: 3.5.3 | Components: build | Labels: build, release
Votes: 0 | Watchers: 2

Currently 3.5.3 is tagged as alpha both in the build and in JIRA. We should reach consensus on the tag before release; see the email thread on the dev list for more details. Deliverables:
- Consensus on using beta as the version for the 3.5.3 release.
- Update build.xml.
- Update JIRA (3.5.3-alpha -> 3.5.3-beta).
Thread on the dev list: http://mail-archives.apache.org/mod_mbox/zookeeper-dev/201703.mbox/%3CCA%2Bi0x1JZacVMQGd_Jb34jSaw7p_nhWpcx9uzHwCwPPF%2BPPrf3g%40mail.gmail.com%3E
| ZooKeeper | ZOOKEEPER-2733 | ZOOKEEPER-2728 Cleanup findbug warnings in branch-3.4: Dodgy code Warnings |
Sub-task | Resolved | Major | Fixed | Abraham Fine | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 20/Mar/17 01:29 | 21/Jun/17 19:26 | 24/May/17 10:57 | 3.4.10 | 3.4.11 | 0 | 4 | Please refer the attached sheet in parent jira. Below is the details of findbug warnings. {code} DB org.apache.zookeeper.server.quorum.auth.SaslQuorumAuthLearner.send(DataOutputStream, byte[]) uses the same code for two branches DLS Dead store to txn in org.apache.zookeeper.server.quorum.LearnerHandler.packetToString(QuorumPacket) NP Load of known null value in org.apache.zookeeper.server.PrepRequestProcessor.pRequest(Request) NP Possible null pointer dereference in org.apache.zookeeper.server.PurgeTxnLog.purgeOlderSnapshots(FileTxnSnapLog, File) due to return value of called method NP Possible null pointer dereference in org.apache.zookeeper.server.PurgeTxnLog.purgeOlderSnapshots(FileTxnSnapLog, File) due to return value of called method NP Load of known null value in org.apache.zookeeper.server.quorum.auth.SaslQuorumAuthLearner.send(DataOutputStream, byte[]) NP Load of known null value in org.apache.zookeeper.server.quorum.auth.SaslQuorumAuthServer.send(DataOutputStream, byte[], QuorumAuth$Status) NP Possible null pointer dereference in org.apache.zookeeper.server.upgrade.UpgradeMain.copyFiles(File, File, String) due to return value of called method RCN Redundant nullcheck of bytes, which is known to be non-null in org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.next() SF Switch statement found in org.apache.zookeeper.server.PrepRequestProcessor.pRequest(Request) where default case is missing SF Switch statement found in org.apache.zookeeper.server.PrepRequestProcessor.pRequest2Txn(int, long, Request, Record, boolean) where default case is missing SF Switch statement found in org.apache.zookeeper.server.quorum.AuthFastLeaderElection$Messenger$WorkerReceiver.run() where default case is missing SF Switch statement found in 
org.apache.zookeeper.server.quorum.AuthFastLeaderElection$Messenger$WorkerSender.process(AuthFastLeaderElection$ToSend) where default case is missing SF Switch statement found in org.apache.zookeeper.server.quorum.Follower.processPacket(QuorumPacket) where default case is missing SF Switch statement found in org.apache.zookeeper.server.quorum.Observer.processPacket(QuorumPacket) where default case is missing ST Write to static field org.apache.zookeeper.server.SyncRequestProcessor.randRoll from instance method org.apache.zookeeper.server.SyncRequestProcessor.run() UrF Unread public/protected field: org.apache.zookeeper.server.upgrade.DataTreeV1$ProcessTxnResult.err UrF Unread public/protected field: org.apache.zookeeper.server.upgrade.DataTreeV1$ProcessTxnResult.path UrF Unread public/protected field: org.apache.zookeeper.server.upgrade.DataTreeV1$ProcessTxnResult.stat UrF Unread public/protected field: org.apache.zookeeper.server.upgrade.DataTreeV1$ProcessTxnResult.type {code} |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 2 years, 39 weeks, 1 day ago | 0|i3chzj: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
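The SF entries above all have the same shape: a switch over a packet or request type with no default branch, so an unexpected value falls through silently. A minimal sketch of the usual fix, with hypothetical type constants (not the actual ZooKeeper code):

```java
/** Sketch of the SF (switch missing default case) fix: every switch gets
 *  an explicit default branch so an unknown value is handled loudly
 *  instead of being silently ignored. Constants are hypothetical. */
public class SwitchDefaultFix {
    static final int PING = 1;
    static final int ACK = 2;

    static String describe(int packetType) {
        switch (packetType) {
            case PING:
                return "ping";
            case ACK:
                return "ack";
            default:
                // Without this branch, FindBugs flags SF: unknown types slip by.
                return "unknown(" + packetType + ")";
        }
    }
}
```

The same idea fixes the dead-store and known-null warnings: make the unreachable or redundant branch explicit, or delete it.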
| ZooKeeper | ZOOKEEPER-2732 | ZOOKEEPER-2728 Cleanup findbug warnings in branch-3.4: Performance Warnings |
Sub-task | Resolved | Major | Fixed | Abraham Fine | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 20/Mar/17 01:25 | 22/May/17 13:58 | 22/May/17 00:26 | 3.4.11 | 0 | 2 | Please refer to the attached sheet in the parent jira. Below are the details of the findbugs warnings. {code} Bx Boxing/unboxing to parse a primitive new org.apache.zookeeper.server.quorum.QuorumCnxManager(long, Map, QuorumAuthServer, QuorumAuthLearner, int, boolean, int, boolean) Bx new org.apache.zookeeper.server.quorum.QuorumCnxManager(long, Map, QuorumAuthServer, QuorumAuthLearner, int, boolean, int, boolean) invokes inefficient new Integer(String) constructor; use Integer.valueOf(String) instead Dm org.apache.zookeeper.server.quorum.FastLeaderElection$Notification.toString() invokes inefficient new String(String) constructor WMI org.apache.zookeeper.server.DataTree.dumpEphemerals(PrintWriter) makes inefficient use of keySet iterator instead of entrySet iterator WMI org.apache.zookeeper.server.quorum.flexible.QuorumHierarchical.computeGroupWeight() makes inefficient use of keySet iterator instead of entrySet iterator WMI org.apache.zookeeper.server.quorum.flexible.QuorumHierarchical.containsQuorum(HashSet) makes inefficient use of keySet iterator instead of entrySet iterator WMI org.apache.zookeeper.ZooKeeperMain.usage() makes inefficient use of keySet iterator instead of entrySet iterator {code} |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 2 years, 43 weeks, 3 days ago | 0|i3chzb: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
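The Bx and WMI warnings above each have a mechanical fix: use the cached-value factory `Integer.valueOf` instead of the boxing constructor, and iterate `entrySet()` when both key and value are needed. An illustrative sketch (method names are hypothetical, not the patched ZooKeeper code):

```java
import java.util.Map;

/** Sketch of the two fix patterns FindBugs suggests for the Bx/WMI warnings. */
public class PerfFixes {
    // Bx: prefer the caching factory over the deprecated boxing constructor.
    static int parsePort(String s) {
        return Integer.valueOf(s);          // instead of new Integer(s)
    }

    // WMI: iterate entrySet() so the value comes with the key, instead of
    // doing an extra map.get(key) hash lookup inside a keySet() loop.
    static long sumValues(Map<String, Integer> counts) {
        long total = 0;
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            total += e.getValue();          // no per-key lookup
        }
        return total;
    }
}
```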
| ZooKeeper | ZOOKEEPER-2731 | ZOOKEEPER-2728 Cleanup findbug warnings in branch-3.4: Malicious code vulnerability Warnings |
Sub-task | Resolved | Major | Fixed | Abraham Fine | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 20/Mar/17 01:22 | 24/May/17 12:59 | 24/May/17 12:17 | 3.4.9 | 3.4.11 | 0 | 2 | Please refer to the attached sheet in the parent jira. Below are the details of the findbugs warnings. {code} MS org.apache.zookeeper.Environment.JAAS_CONF_KEY isn't final but should be Bug type MS_SHOULD_BE_FINAL (click for details) In class org.apache.zookeeper.Environment Field org.apache.zookeeper.Environment.JAAS_CONF_KEY At Environment.java:[line 34] MS org.apache.zookeeper.server.ServerCnxn.cmd2String is a mutable collection which should be package protected Bug type MS_MUTABLE_COLLECTION_PKGPROTECT (click for details) In class org.apache.zookeeper.server.ServerCnxn Field org.apache.zookeeper.server.ServerCnxn.cmd2String At ServerCnxn.java:[line 230] MS org.apache.zookeeper.ZooDefs$Ids.OPEN_ACL_UNSAFE is a mutable collection Bug type MS_MUTABLE_COLLECTION (click for details) In class org.apache.zookeeper.ZooDefs$Ids Field org.apache.zookeeper.ZooDefs$Ids.OPEN_ACL_UNSAFE At ZooDefs.java:[line 100] MS org.apache.zookeeper.ZooKeeperMain.commandMap is a mutable collection which should be package protected Bug type MS_MUTABLE_COLLECTION_PKGPROTECT (click for details) In class org.apache.zookeeper.ZooKeeperMain Field org.apache.zookeeper.ZooKeeperMain.commandMap At ZooKeeperMain.java:[line 53] {code} |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 2 years, 43 weeks, 1 day ago | 0|i3chz3: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
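The MS warnings above are fixed by declaring the static fields `final` and, for collections, exposing an unmodifiable view so callers cannot mutate shared state. An illustrative sketch with hypothetical field values (not the real Environment/ZooDefs contents):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

/** Sketch of the MS_SHOULD_BE_FINAL and MS_MUTABLE_COLLECTION fixes.
 *  Field names echo the warnings above; the values are hypothetical. */
public class StaticFieldFixes {
    // MS_SHOULD_BE_FINAL: a public static config key must be final,
    // otherwise any caller can reassign it.
    public static final String JAAS_CONF_KEY = "java.security.auth.login.config";

    // MS_MUTABLE_COLLECTION: wrap the list so callers cannot add/remove.
    public static final List<String> OPEN_ACL_UNSAFE =
            Collections.unmodifiableList(Arrays.asList("world", "anyone"));
}
```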
| ZooKeeper | ZOOKEEPER-2730 | ZOOKEEPER-2728 Cleanup findbug warnings in branch-3.4: Disable Internationalization Warnings |
Sub-task | Resolved | Major | Fixed | Janek P | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 20/Mar/17 01:19 | 19/Jun/17 09:27 | 19/Jun/17 06:29 | 3.4.11 | 0 | 4 | Please refer to the attached sheet in the parent jira. Below are the details of the findbugs warnings. {code} Dm Found reliance on default encoding in org.apache.jute.compiler.CGenerator.genCode(): new java.io.FileWriter(File) Dm Found reliance on default encoding in org.apache.jute.compiler.CppGenerator.genCode(): new java.io.FileWriter(File) Dm Found reliance on default encoding in org.apache.jute.compiler.JRecord.genCsharpCode(File): new java.io.FileWriter(File) Dm Found reliance on default encoding in org.apache.jute.compiler.JRecord.genJavaCode(File): new java.io.FileWriter(File) Dm Found reliance on default encoding in new org.apache.jute.XmlOutputArchive(OutputStream): new java.io.PrintStream(OutputStream) Dm Found reliance on default encoding in org.apache.zookeeper.client.FourLetterWordMain.send4LetterWord(String, int, String, int): new java.io.InputStreamReader(InputStream) Dm Found reliance on default encoding in org.apache.zookeeper.client.FourLetterWordMain.send4LetterWord(String, int, String, int): String.getBytes() Dm Found reliance on default encoding in org.apache.zookeeper.ClientCnxn$SendThread.pingRwServer(): new java.io.InputStreamReader(InputStream) Dm Found reliance on default encoding in org.apache.zookeeper.ClientCnxn$SendThread.pingRwServer(): String.getBytes() Dm Found reliance on default encoding in org.apache.zookeeper.server.auth.DigestAuthenticationProvider.generateDigest(String): String.getBytes() Dm Found reliance on default encoding in org.apache.zookeeper.server.auth.DigestAuthenticationProvider.handleAuthentication(ServerCnxn, byte[]): new String(byte[]) Dm Found reliance on default encoding in org.apache.zookeeper.server.DataTree.updateBytes(String, long): new String(byte[]) Dm Found reliance on default encoding in org.apache.zookeeper.server.DataTree.updateBytes(String, long): 
String.getBytes() Dm Found reliance on default encoding in org.apache.zookeeper.server.DataTree.updateCount(String, int): new String(byte[]) Dm Found reliance on default encoding in org.apache.zookeeper.server.DataTree.updateCount(String, int): String.getBytes() Dm Found reliance on default encoding in org.apache.zookeeper.server.DataTree.updateQuotaForPath(String): String.getBytes() Dm Found reliance on default encoding in org.apache.zookeeper.server.NettyServerCnxn$SendBufferWriter.checkFlush(boolean): String.getBytes() Dm Found reliance on default encoding in org.apache.zookeeper.server.NIOServerCnxn$SendBufferWriter.checkFlush(boolean): String.getBytes() Dm Found reliance on default encoding in org.apache.zookeeper.server.persistence.FileSnap.<static initializer for FileSnap>(): String.getBytes() Dm Found reliance on default encoding in org.apache.zookeeper.server.persistence.FileTxnLog.<static initializer for FileTxnLog>(): String.getBytes() Dm Found reliance on default encoding in org.apache.zookeeper.server.quorum.QuorumPeer.readLongFromFile(String): new java.io.FileReader(File) Dm Found reliance on default encoding in org.apache.zookeeper.server.quorum.QuorumPeer.writeLongToFile(String, long): new java.io.OutputStreamWriter(OutputStream) Dm Found reliance on default encoding in org.apache.zookeeper.server.quorum.QuorumPeerConfig.parseProperties(Properties): new java.io.FileReader(File) Dm Found reliance on default encoding in org.apache.zookeeper.server.Request.toString(): new String(byte[]) Dm Found reliance on default encoding in org.apache.zookeeper.server.ServerCnxn.<static initializer for ServerCnxn>(): String.getBytes() Dm Found reliance on default encoding in org.apache.zookeeper.server.TraceFormatter.main(String[]): new String(byte[]) Dm Found reliance on default encoding in org.apache.zookeeper.server.util.OSMXBean.getMaxFileDescriptorCount(): new java.io.InputStreamReader(InputStream) Dm Found reliance on default encoding in 
org.apache.zookeeper.server.util.OSMXBean.getOpenFileDescriptorCount(): new java.io.InputStreamReader(InputStream) Dm Found reliance on default encoding in org.apache.zookeeper.ServerAdminClient.dump(String, int): new String(byte[]) Dm Found reliance on default encoding in org.apache.zookeeper.ServerAdminClient.dump(String, int): String.getBytes() Dm Found reliance on default encoding in org.apache.zookeeper.ServerAdminClient.getTraceMask(String, int): String.getBytes() Dm Found reliance on default encoding in org.apache.zookeeper.ServerAdminClient.kill(String, int): new String(byte[]) Dm Found reliance on default encoding in org.apache.zookeeper.ServerAdminClient.kill(String, int): String.getBytes() Dm Found reliance on default encoding in org.apache.zookeeper.ServerAdminClient.ruok(String, int): new String(byte[]) Dm Found reliance on default encoding in org.apache.zookeeper.ServerAdminClient.ruok(String, int): String.getBytes() Dm Found reliance on default encoding in org.apache.zookeeper.ServerAdminClient.setTraceMask(String, int, String): String.getBytes() Dm Found reliance on default encoding in org.apache.zookeeper.ServerAdminClient.stat(String, int): new String(byte[]) Dm Found reliance on default encoding in org.apache.zookeeper.ServerAdminClient.stat(String, int): String.getBytes() Dm Found reliance on default encoding in org.apache.zookeeper.Shell.runCommand(): new java.io.InputStreamReader(InputStream) Dm Found reliance on default encoding in org.apache.zookeeper.version.util.VerGen.generateFile(File, VerGen$Version, String, String): new java.io.FileWriter(File) Dm Found reliance on default encoding in org.apache.zookeeper.ZooKeeperMain.createQuota(ZooKeeper, String, long, int): new String(byte[]) Dm Found reliance on default encoding in org.apache.zookeeper.ZooKeeperMain.createQuota(ZooKeeper, String, long, int): String.getBytes() Dm Found reliance on default encoding in org.apache.zookeeper.ZooKeeperMain.delQuota(ZooKeeper, String, boolean, boolean): 
new String(byte[]) Dm Found reliance on default encoding in org.apache.zookeeper.ZooKeeperMain.delQuota(ZooKeeper, String, boolean, boolean): String.getBytes() Dm Found reliance on default encoding in org.apache.zookeeper.ZooKeeperMain.processZKCmd(ZooKeeperMain$MyCommandOptions): new String(byte[]) Dm Found reliance on default encoding in org.apache.zookeeper.ZooKeeperMain.processZKCmd(ZooKeeperMain$MyCommandOptions): String.getBytes() Dm Found reliance on default encoding in org.apache.zookeeper.ZooKeeperMain.run(): new java.io.InputStreamReader(InputStream) Dm Found reliance on default encoding in org.apache.zookeeper.ZooKeeperMain$1.processResult(int, String, Object, byte[], Stat): new String(byte[]) {code} |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 2 years, 39 weeks, 3 days ago | 0|i3chyv: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
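Every Dm entry above is the same defect: a String/stream conversion that silently uses the platform default encoding, so behavior varies by host locale. The standard fix is to pass an explicit Charset everywhere; a minimal sketch (helper names are hypothetical, not the patched code):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;

/** Sketch of the Dm (default encoding) fix: always name the charset. */
public class CharsetFixes {
    static byte[] encode(String s) {
        return s.getBytes(StandardCharsets.UTF_8);    // not s.getBytes()
    }

    static String decode(byte[] b) {
        return new String(b, StandardCharsets.UTF_8); // not new String(b)
    }

    // FileWriter/FileReader take no charset parameter at all; the fix is
    // to wrap a raw stream in an OutputStreamWriter/InputStreamReader.
    static void writeUtf8(ByteArrayOutputStream out, String s) throws IOException {
        try (Writer w = new OutputStreamWriter(out, StandardCharsets.UTF_8)) {
            w.write(s);
        }
    }
}
```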
| ZooKeeper | ZOOKEEPER-2729 | ZOOKEEPER-2728 Cleanup findbug warnings in branch-3.4: Correctness Warnings |
Sub-task | Resolved | Major | Fixed | Abraham Fine | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 20/Mar/17 01:16 | 13/Apr/17 12:57 | 08/Apr/17 18:31 | 3.4.9 | 3.4.11 | 0 | 3 | {code} (1) INT Bad comparison of nonnegative value with 0 in org.apache.zookeeper.server.quorum.auth.SaslQuorumAuthLearner.send(DataOutputStream, byte[]) Bug type INT_BAD_COMPARISON_WITH_NONNEGATIVE_VALUE (click for details) In class org.apache.zookeeper.server.quorum.auth.SaslQuorumAuthLearner In method org.apache.zookeeper.server.quorum.auth.SaslQuorumAuthLearner.send(DataOutputStream, byte[]) Value 0 At SaslQuorumAuthLearner.java:[line 176] (2) INT Bad comparison of nonnegative value with 0 in org.apache.zookeeper.server.quorum.auth.SaslQuorumAuthServer.send(DataOutputStream, byte[], QuorumAuth$Status) Bug type INT_BAD_COMPARISON_WITH_NONNEGATIVE_VALUE (click for details) In class org.apache.zookeeper.server.quorum.auth.SaslQuorumAuthServer In method org.apache.zookeeper.server.quorum.auth.SaslQuorumAuthServer.send(DataOutputStream, byte[], QuorumAuth$Status) Value 0 At SaslQuorumAuthServer.java:[line 170] {code} |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 2 years, 49 weeks ago | 0|i3chyn: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
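INT_BAD_COMPARISON_WITH_NONNEGATIVE_VALUE means a value that can never be negative, such as an array length, is compared against 0, so the branch is dead code and usually hides the check the author meant. A sketch of the shape of the fix, assuming the intent was a null/empty guard (simplified, not the actual SaslQuorumAuth* code):

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

/** Sketch of the INT_BAD_COMPARISON fix: replace an always-false
 *  "length < 0" guard with the null/empty check that was intended. */
public class LengthCheckFix {
    static byte[] frame(byte[] challenge) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream dout = new DataOutputStream(bos);
        try {
            // Buggy shape FindBugs flags: "challenge.length < 0" is always
            // false. The meaningful guard is null-or-empty:
            if (challenge == null || challenge.length == 0) {
                dout.writeInt(0);
            } else {
                dout.writeInt(challenge.length);
                dout.write(challenge);
            }
            dout.flush();
        } catch (IOException e) {
            throw new RuntimeException(e); // cannot happen on an in-memory stream
        }
        return bos.toByteArray();
    }
}
```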
| ZooKeeper | ZOOKEEPER-2728 | Clean up findbug warnings in branch-3.4 |
Bug | Resolved | Major | Fixed | Rakesh Radhakrishnan | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 20/Mar/17 01:13 | 12/Jul/17 23:04 | 19/Jun/17 06:36 | 3.4.9 | 3.4.11 | 1 | 3 | ZOOKEEPER-2729, ZOOKEEPER-2730, ZOOKEEPER-2731, ZOOKEEPER-2732, ZOOKEEPER-2733, ZOOKEEPER-2749, ZOOKEEPER-2762 | This jira is to clean up the findbugs warnings reported in branch-3.4: [Branch3.4 FindbugsWarnings.html|https://builds.apache.org/job/PreCommit-ZOOKEEPER-github-pr-build/444/artifact/build/test/findbugs/newPatchFindbugsWarnings.html] |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 2 years, 39 weeks, 3 days ago | 0|i3chyf: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2727 | WARN and stacktrace for normally closed socket |
Bug | Resolved | Major | Won't Fix | Mark Fenes | Andrey | Andrey | 17/Mar/17 11:48 | 23/Mar/18 10:43 | 23/Mar/18 10:43 | 3.4.9 | 0 | 4 | Steps to reproduce: * set up zookeeper * set up a TCP load balancer. This balancer should check zookeeper clientPort liveness (healthcheck) by opening and closing a TCP connection to clientPort. See https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/ or https://www.digitalocean.com/community/tutorials/how-to-create-your-first-digitalocean-load-balancer#step-2-—-creating-the-load-balancer for details. * in the logs: {code} 2017-03-17 15:41:19,843 [myid:1] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@357] - caught end of stream exception EndOfStreamException: Unable to read additional data from client sessionid 0x0, likely client has closed socket at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228) at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:203) at java.lang.Thread.run(Thread.java:745) {code} The issue is here: https://github.com/apache/zookeeper/blob/5fe68506f217246c7ebd96803f9c78e13ec2f11a/src/java/main/org/apache/zookeeper/server/NIOServerCnxn.java#L322 -1 is a normal socket termination. Expected: * reduce the log level to INFO * do not log the stacktrace. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 1 year, 51 weeks, 6 days ago | 0|i3cfm7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
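The requested change boils down to logging the end-of-stream message at INFO and without the Throwable argument, so no stack trace is emitted for a routine client close. An illustrative sketch using java.util.logging (ZooKeeper itself logs via slf4j/log4j; the class and method names here are hypothetical):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

/** Sketch of demoting an orderly client close (read() returning -1)
 *  from WARN-with-stacktrace to a plain INFO message. */
public class CloseLogging {
    private static final Logger LOG = Logger.getLogger("NIOServerCnxn");

    static String onEndOfStream(Exception e) {
        // Message only, no Throwable argument: nothing prints a stack trace.
        String msg = "Client closed socket: " + e.getMessage();
        LOG.log(Level.INFO, msg);
        return msg;
    }
}
```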
| ZooKeeper | ZOOKEEPER-2726 | Patch for ZOOKEEPER-2693 introduces potential race condition |
Bug | Closed | Major | Fixed | Kyle Nusbaum | Kyle Nusbaum | Kyle Nusbaum | 16/Mar/17 15:15 | 31/Mar/17 05:01 | 16/Mar/17 16:01 | 3.4.10, 3.5.3, 3.6.0 | 0 | 5 | ZOOKEEPER-2693 | We noticed, when porting the patch, that isEnabled is not thread-safe. Synchronizing it and resetWhitelist should solve the issue. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 6 days ago | 0|i3cdvr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
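The fix described above is to put isEnabled and resetWhitelist behind the same lock, so a reader can never observe a half-updated whitelist. A simplified sketch of that pattern (names mirror the jira; this is not the actual patch):

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

/** Sketch of guarding whitelist reads and writes with one monitor. */
public class WhitelistHolder {
    private Set<String> whitelist = Collections.emptySet();

    // Both methods synchronize on "this", so the check and the reset
    // can never interleave mid-update.
    public synchronized boolean isEnabled(String cmd) {
        return whitelist.contains(cmd);
    }

    public synchronized void resetWhitelist(String csv) {
        Set<String> next = new HashSet<>();
        for (String c : csv.split(",")) {
            next.add(c.trim());
        }
        whitelist = next; // published only while holding the lock
    }
}
```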
| ZooKeeper | ZOOKEEPER-2725 | Upgrading to a global session fails with a multiop |
Bug | Resolved | Major | Fixed | Brian Nixon | Brian Nixon | Brian Nixon | 15/Mar/17 18:34 | 12/Jul/17 23:05 | 21/Mar/17 12:50 | 3.5.2 | 3.5.4, 3.6.0 | server | 0 | 5 | On an ensemble with local sessions enabled, when a client with a local session requests the creation of an ephemeral node within a multi-op, the client gets a session expired message. The same multi-op works if the session is already global. This breaks the client's expectation of seamless promotion from local session to global session server-side. | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 2 days ago | 0|i3cc13: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2724 | Skip cert files for releaseaudit target. |
Improvement | Closed | Blocker | Fixed | Michael Han | Michael Han | Michael Han | 15/Mar/17 17:21 | 17/May/17 23:44 | 17/Mar/17 10:59 | 3.5.2 | 3.5.3 | build | 0 | 2 | In branch-3.5, release auditing generates warnings against cert files, as these files don't contain the Apache License (AL) header. I don't think these files should be checked, because they are not source files, and we skip them in the master branch. We should do the same for branch-3.5 by skipping these cert files as well. This should be fixed before the 3.5.3 release. Attached is a snippet of the warnings for reference: {noformat} [rat:report] !????? /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-github-pr-build/build/zookeeper-3.5.3-alpha-SNAPSHOT/contrib/rest/conf/keys/rest.cer [rat:report] !????? /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-github-pr-build/build/zookeeper-3.5.3-alpha-SNAPSHOT/src/contrib/rest/conf/keys/rest.cer {noformat} |
build | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 6 days ago | 0|i3cbwf: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2723 | ConnectStringParser does not parse correctly if quorum string has znode path |
Bug | Open | Major | Unresolved | Vishal Khandelwal | Vishal Khandelwal | Vishal Khandelwal | 15/Mar/17 07:11 | 18/Apr/17 16:45 | 0 | 3 | 2017-03-14 07:10:26,247 INFO [main] zookeeper.ZooKeeper - Initiating client connection, connectString=x1-1-was.ops.sfdc.net:2181,x2-1-was.ops.sfdc.net:2181,x3-1-was.ops.sfdc.net:2181,x4-1-was.ops.sfdc.net:2181,x5-1-was.ops.sfdc.net:2181:/hbase sessionTimeout=60000 watcher=org.apache.hadoop.hbase.zookeeper.PendingWatcher@6e16b8b5 2017-03-14 07:10:26,250 ERROR [main] client.StaticHostProvider - Unable to connect to server: x5-1-was.ops.sfdc.net:2181:2181 java.net.UnknownHostException: x5-1-was.ops.sfdc.net:2181: Name or service not known at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method) at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928) at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323) at java.net.InetAddress.getAllByName0(InetAddress.java:1276) at java.net.InetAddress.getAllByName(InetAddress.java:1192) at java.net.InetAddress.getAllByName(InetAddress.java:1126) at org.apache.zookeeper.client.StaticHostProvider.<init>(StaticHostProvider.java:60) at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:446) at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:380) at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk(RecoverableZooKeeper.java:141) at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.<init>(RecoverableZooKeeper.java:128) at org.apache.hadoop.hbase.zookeeper.ZKUtil.connect(ZKUtil.java:135) at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:173) at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:147) at org.apache.hadoop.hbase.client.ZooKeeperKeepAliveConnection.<init>(ZooKeeperKeepAliveConnection.java:43) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveZooKeeperWatcher(HConnectionManager.java:1875) at 
org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:82) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.retrieveClusterId(HConnectionManager.java:929) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:714) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:466) at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:445) at org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:326) | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 2 years, 48 weeks, 2 days ago | 0|i3caqv: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
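The root cause is that the trailing chroot path ("/hbase") must be stripped before the host list is split on commas; otherwise it sticks to the last host:port token, producing the bogus "x5-1-was.ops.sfdc.net:2181:2181" seen in the log above. A sketch of the correct split order (hypothetical helper, not the real ConnectStringParser):

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch of parsing "host1:port,host2:port/chroot": peel off the chroot
 *  suffix first, then split the remaining host list on commas. */
public class ChrootSplit {
    static String chroot(String connectString) {
        int slash = connectString.indexOf('/');
        return slash < 0 ? null : connectString.substring(slash);
    }

    static List<String> hosts(String connectString) {
        int slash = connectString.indexOf('/');
        String hostPart = slash < 0 ? connectString
                                    : connectString.substring(0, slash);
        List<String> out = new ArrayList<>();
        for (String h : hostPart.split(",")) {
            out.add(h.trim()); // each token is now a clean host:port
        }
        return out;
    }
}
```

Note the quorum string in the report uses ":/hbase" (colon before the chroot), which is itself malformed; with the standard "host:port/chroot" form the split above yields clean tokens.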
| ZooKeeper | ZOOKEEPER-2722 | Flaky Test: org.apache.zookeeper.test.ReadOnlyModeTest.testSessionEstablishment |
Bug | Resolved | Major | Fixed | Michael Han | Michael Han | Michael Han | 14/Mar/17 19:16 | 24/Apr/17 12:55 | 17/Apr/17 19:38 | 3.4.11, 3.5.4, 3.6.0 | tests | 0 | 5 | ZOOKEEPER-2135 | {noformat} Error Message KeeperErrorCode = ConnectionLoss for /test Stacktrace org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /test at org.apache.zookeeper.KeeperException.create(KeeperException.java:99) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:1423) at org.apache.zookeeper.test.ReadOnlyModeTest.testSessionEstablishment(ReadOnlyModeTest.java:238) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:79) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.lang.Thread.run(Thread.java:745) {noformat} Looks like we should retry before giving up. |
flaky | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 2 years, 47 weeks, 3 days ago | 1 | 1 | 0|i3c9xr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2721 | org.apache.zookeeper.server.quorum.ReconfigRecoveryTest fails intermittently |
Test | Open | Major | Unresolved | Unassigned | Sneha Kanekar | Sneha Kanekar | 14/Mar/17 05:31 | 14/Mar/17 05:48 | 3.6.0 | quorum, server, tests | 0 | 2 | Ubuntu:14.04 | The test-suite org.apache.zookeeper.server.quorum.ReconfigRecoveryTest fails intermittently on ppc64le and x86 architecture. I have attached the standard output log. The error message is as follows: {code:borderStyle=solid} Testcase: testCurrentServersAreObserversInNextConfig took 90.488 sec FAILED waiting for server 3 being up junit.framework.AssertionFailedError: waiting for server 3 being up at org.apache.zookeeper.server.quorum.ReconfigRecoveryTest.testCurrentServersAreObserversInNextConfig(ReconfigRecoveryTest.java:217) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:79) {code} Also, this issue is related to ZOOKEEPER-1806 and ZOOKEEPER-2080. Both of them are marked as fixed, and still I am getting this failure. |
ppc64le, x86 | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 1 week, 2 days ago | 0|i3c8fb: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2720 | org.apache.zookeeper.test.WatchEventWhenAutoResetTest fails intermittently |
Test | Resolved | Major | Duplicate | Michael Han | Sneha Kanekar | Sneha Kanekar | 14/Mar/17 02:05 | 24/Aug/17 12:49 | 24/Aug/17 12:49 | 3.6.0 | tests | 0 | 3 | ZOOKEEPER-2807, ZOOKEEPER-2135 | Ubuntu:14.04 | The test-suite org.apache.zookeeper.test.WatchEventWhenAutoResetTest fails intermittently. It is failing on ppc64le and x86 architecture. I have attached the standard output log. The error message is as follows: {code:borderStyle=solid} Testcase: testNodeDataChanged took 1.959 sec FAILED expected:<NodeDataChanged> but was:<NodeDeleted> junit.framework.AssertionFailedError: expected:<NodeDataChanged> but was:<NodeDeleted> at org.apache.zookeeper.test.WatchEventWhenAutoResetTest$EventsWatcher.assertEvent(WatchEventWhenAutoResetTest.java:67) at org.apache.zookeeper.test.WatchEventWhenAutoResetTest.testNodeDataChanged(WatchEventWhenAutoResetTest.java:117) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:79) {code} |
ppc64le, x86 | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 2 years, 30 weeks, 1 day ago | 0|i3c83r: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2719 | Port ZOOKEEPER-2169 (TTL Nodes) to 3.5 branch |
New Feature | Closed | Major | Fixed | Jordan Zimmerman | Jordan Zimmerman | Jordan Zimmerman | 12/Mar/17 13:38 | 21/Jan/19 08:41 | 17/Mar/17 11:05 | 3.5.3 | java client, server | 3 | 4 | ZOOKEEPER-2169 is a useful feature that should be deployed sooner than later. Take the work done in the master branch and port it to the 3.5 branch | ttl_nodes | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 6 days ago | 0|i3bamf: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2718 | org.apache.zookeeper.server.quorum.StandaloneDisabledTest fails intermittently |
Test | Closed | Major | Fixed | Michael Han | Sneha Kanekar | Sneha Kanekar | 10/Mar/17 03:45 | 17/May/17 23:43 | 14/Mar/17 12:06 | 3.6.0 | 3.5.3, 3.6.0 | quorum, server, tests | 0 | 4 | Ubuntu:14.04 | The test-suite org.apache.zookeeper.server.quorum.StandaloneDisabledTest fails intermittently with a timeout error. It fails on x86 and ppc64le architecture. The standard output is as follows: {code:borderStyle=solid} Testsuite: org.apache.zookeeper.server.quorum.StandaloneDisabledTest Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0 sec Testcase: startSingleServerTest took 0.001 sec Caused an ERROR Timeout occurred. Please note the time in the report does not reflect the time until the timeout. junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout. {code} |
ppc64le, x86 | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 1 week, 2 days ago | 0|i3b8cn: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2717 | org.apache.zookeeper.server.quorum.RaceConditionTest fails intermittently |
Test | Open | Major | Unresolved | Unassigned | Sneha Kanekar | Sneha Kanekar | 10/Mar/17 02:17 | 04/Apr/17 00:50 | 3.6.0 | quorum, server, tests | 0 | 3 | Ubuntu:14.04 | The test-suite org.apache.zookeeper.server.quorum.RaceConditionTest fails intermittently on ppc64le and x86 architecture with the following error message: {code:borderStyle=solid} org.apache.zookeeper.server.quorum.RaceConditionTest.testRaceConditionBetweenLeaderAndAckRequestProcessor Stacktrace: Leader failed to transition to new state. Current state is leading junit.framework.AssertionFailedError: Leader failed to transition to new state. Current state is leading at org.apache.zookeeper.server.quorum.RaceConditionTest.testRaceConditionBetweenLeaderAndAckRequestProcessor(RaceConditionTest.java:82) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:79) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:745) {code} I have also attached the standard output log file. |
ppc64le, x86 | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 2 years, 50 weeks, 2 days ago | 0|i3b867: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2716 | Flaky Test: org.apache.zookeeper.server.SessionTrackerTest.testAddSessionAfterSessionExpiry |
Test | Closed | Major | Fixed | Michael Han | Michael Han | Michael Han | 09/Mar/17 16:46 | 31/Mar/17 05:01 | 16/Mar/17 06:44 | 3.4.9, 3.5.2 | 3.4.10, 3.5.3, 3.6.0 | server, tests | 0 | 3 | ZOOKEEPER-2135 | This test fails once in a while because the test logic makes timing assumptions that should be fixed. PR in a minute. Sample log when it fails: {noformat} Error Message Duplicate session expiry request has been generated expected:<1> but was:<0> Stacktrace junit.framework.AssertionFailedError: Duplicate session expiry request has been generated expected:<1> but was:<0> at org.apache.zookeeper.server.SessionTrackerTest.testAddSessionAfterSessionExpiry(SessionTrackerTest.java:82) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:79) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:745) Standard Output 2017-03-04 19:25:16,141 [myid:] - INFO [main:JUnit4ZKTestRunner@47] - No test.method specified. using default methods. 2017-03-04 19:25:16,358 [myid:] - INFO [main:JUnit4ZKTestRunner@47] - No test.method specified. using default methods. 
2017-03-04 19:25:16,419 [myid:] - INFO [main:ZKTestCase$1@58] - STARTING testAddSessionAfterSessionExpiry 2017-03-04 19:25:16,430 [myid:] - INFO [Time-limited test:JUnit4ZKTestRunner$LoggedInvokeMethod@77] - RUNNING TEST METHOD testAddSessionAfterSessionExpiry 2017-03-04 19:25:16,581 [myid:] - INFO [Time-limited test:Environment@109] - Server environment:zookeeper.version=3.5.3-alpha-SNAPSHOT-6d9fc04c052adbc79bbbb1c63f3f00c816fb8e56, built on 03/04/2017 19:24 GMT 2017-03-04 19:25:16,581 [myid:] - INFO [Time-limited test:Environment@109] - Server environment:host.name=jenkins-ubuntu3.apache.org 2017-03-04 19:25:16,581 [myid:] - INFO [Time-limited test:Environment@109] - Server environment:java.version=1.8.0_121 2017-03-04 19:25:16,581 [myid:] - INFO [Time-limited test:Environment@109] - Server environment:java.vendor=Oracle Corporation 2017-03-04 19:25:16,582 [myid:] - INFO [Time-limited test:Environment@109] - Server environment:java.home=/usr/local/asfpackages/java/jdk1.8.0_121/jre 2017-03-04 19:25:16,587 [myid:] - INFO [Time-limited test:Environment@109] - Server 
environment:java.class.path=/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk8/build/test/classes:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk8/build/test/lib/antlr-2.7.7.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk8/build/test/lib/antlr4-runtime-4.5.1-1.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk8/build/test/lib/checkstyle-6.13.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk8/build/test/lib/commons-beanutils-1.9.2.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk8/build/test/lib/commons-cli-1.3.1.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk8/build/test/lib/commons-collections-3.2.2.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk8/build/test/lib/commons-lang3-3.4.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk8/build/test/lib/commons-logging-1.1.1.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk8/build/test/lib/guava-18.0.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk8/build/test/lib/hamcrest-core-1.3.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk8/build/test/lib/junit-4.12.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk8/build/test/lib/mockito-all-1.8.2.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk8/build/classes:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk8/src/java/lib/ivy-2.4.0.jar:/home/jenkins/tools/ant/latest/lib/ant.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk8/build/lib/commons-cli-1.2.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk8/build/lib/jackson-core-asl-1.9.11.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk8/build/lib/jackson-mapper-asl-1.9.11.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk8/build/lib/javacc.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk8/build/lib/javax.servlet-api-3.1.0.jar:/home/jenkins/jen
kins-slave/workspace/ZooKeeper_branch35_jdk8/build/lib/jetty-http-9.2.18.v20160721.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk8/build/lib/jetty-io-9.2.18.v20160721.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk8/build/lib/jetty-security-9.2.18.v20160721.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk8/build/lib/jetty-server-9.2.18.v20160721.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk8/build/lib/jetty-servlet-9.2.18.v20160721.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk8/build/lib/jetty-util-9.2.18.v20160721.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk8/build/lib/jline-2.11.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk8/build/lib/log4j-1.2.17.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk8/build/lib/netty-3.10.5.Final.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk8/build/lib/slf4j-api-1.7.5.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk8/build/lib/slf4j-log4j12-1.7.5.jar:/usr/local/asfpackages/ant/apache-ant-1.10.1/lib/ant-launcher.jar:/home/jenkins/tools/ant/latest/lib/ant-junit.jar:/home/jenkins/tools/ant/latest/lib/ant-junit4.jar 2017-03-04 19:25:16,588 [myid:] - INFO [Time-limited test:Environment@109] - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib 2017-03-04 19:25:16,588 [myid:] - INFO [Time-limited test:Environment@109] - Server environment:java.io.tmpdir=/tmp 2017-03-04 19:25:16,589 [myid:] - INFO [Time-limited test:Environment@109] - Server environment:java.compiler=<NA> 2017-03-04 19:25:16,590 [myid:] - INFO [Time-limited test:Environment@109] - Server environment:os.name=Linux 2017-03-04 19:25:16,590 [myid:] - INFO [Time-limited test:Environment@109] - Server environment:os.arch=amd64 2017-03-04 19:25:16,592 [myid:] - INFO [Time-limited test:Environment@109] - Server environment:os.version=4.4.0-31-generic 2017-03-04 
19:25:16,603 [myid:] - INFO [Time-limited test:Environment@109] - Server environment:user.name=jenkins 2017-03-04 19:25:16,604 [myid:] - INFO [Time-limited test:Environment@109] - Server environment:user.home=/home/jenkins 2017-03-04 19:25:16,604 [myid:] - INFO [Time-limited test:Environment@109] - Server environment:user.dir=/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk8/build/test 2017-03-04 19:25:16,604 [myid:] - INFO [Time-limited test:Environment@109] - Server environment:os.memory.free=347MB 2017-03-04 19:25:16,604 [myid:] - INFO [Time-limited test:Environment@109] - Server environment:os.memory.max=455MB 2017-03-04 19:25:16,604 [myid:] - INFO [Time-limited test:Environment@109] - Server environment:os.memory.total=362MB 2017-03-04 19:25:16,624 [myid:] - INFO [Time-limited test:ZooKeeperServer@907] - minSessionTimeout set to 6000 2017-03-04 19:25:16,624 [myid:] - INFO [Time-limited test:ZooKeeperServer@916] - maxSessionTimeout set to 60000 2017-03-04 19:25:16,625 [myid:] - INFO [Time-limited test:ZooKeeperServer@159] - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk8/build/test/tmp/test1620981504756158168.junit.dir/version-2 snapdir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk8/build/test/tmp/test1620981504756158168.junit.dir/version-2 2017-03-04 19:25:20,434 [myid:] - INFO [SessionTracker:ZooKeeperServer@391] - Expiring session 0x52fbc, timeout of 3000ms exceeded 2017-03-04 19:25:20,450 [myid:] - INFO [Time-limited test:JUnit4ZKTestRunner$LoggedInvokeMethod@98] - TEST METHOD FAILED testAddSessionAfterSessionExpiry java.lang.AssertionError: Duplicate session expiry request has been generated expected:<1> but was:<0> at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.failNotEquals(Assert.java:834) at org.junit.Assert.assertEquals(Assert.java:645) at 
org.apache.zookeeper.server.SessionTrackerTest.testAddSessionAfterSessionExpiry(SessionTrackerTest.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:79) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:745) 2017-03-04 19:25:20,457 [myid:] - INFO [main:ZKTestCase$1@73] - FAILED testAddSessionAfterSessionExpiry java.lang.AssertionError: Duplicate session expiry request has been generated expected:<1> but was:<0> at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.failNotEquals(Assert.java:834) at org.junit.Assert.assertEquals(Assert.java:645) at org.apache.zookeeper.server.SessionTrackerTest.testAddSessionAfterSessionExpiry(SessionTrackerTest.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:79) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:745) 2017-03-04 19:25:20,457 [myid:] - INFO [main:ZKTestCase$1@63] - FINISHED testAddSessionAfterSessionExpiry 2017-03-04 19:25:20,462 [myid:] - INFO [main:ZKTestCase$1@58] - STARTING testCloseSessionRequestAfterSessionExpiry 2017-03-04 19:25:20,475 [myid:] - INFO [Time-limited test:JUnit4ZKTestRunner$LoggedInvokeMethod@77] - RUNNING TEST METHOD testCloseSessionRequestAfterSessionExpiry 2017-03-04 19:25:20,476 [myid:] - INFO [Time-limited test:ZooKeeperServer@907] - minSessionTimeout set to 6000 2017-03-04 19:25:20,476 [myid:] - INFO [Time-limited test:ZooKeeperServer@916] - maxSessionTimeout set to 60000 2017-03-04 19:25:20,476 [myid:] - INFO [Time-limited test:ZooKeeperServer@159] - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk8/build/test/tmp/test8565681305854282002.junit.dir/version-2 snapdir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch35_jdk8/build/test/tmp/test8565681305854282002.junit.dir/version-2 2017-03-04 19:25:26,431 [myid:] - INFO [SessionTracker:ZooKeeperServer@391] - Expiring session 0x52fbc, timeout of 3000ms exceeded 2017-03-04 19:25:26,431 [myid:] - INFO [SessionTracker:ZooKeeperServer@391] - Expiring session 0x52fbc, timeout of 3000ms exceeded 2017-03-04 19:25:26,432 [myid:] - INFO 
[Time-limited test:JUnit4ZKTestRunner$LoggedInvokeMethod@82] - Memory used 25301 2017-03-04 19:25:26,432 [myid:] - INFO [Time-limited test:JUnit4ZKTestRunner$LoggedInvokeMethod@87] - Number of threads 11 2017-03-04 19:25:26,432 [myid:] - INFO [Time-limited test:JUnit4ZKTestRunner$LoggedInvokeMethod@102] - FINISHED TEST METHOD testCloseSessionRequestAfterSessionExpiry 2017-03-04 19:25:26,432 [myid:] - INFO [main:ZKTestCase$1@68] - SUCCEEDED testCloseSessionRequestAfterSessionExpiry 2017-03-04 19:25:26,432 [myid:] - INFO [main:ZKTestCase$1@63] - FINISHED testCloseSessionRequestAfterSessionExpiry {noformat} |
flaky, flaky-build, flaky-test | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 1 week ago | 0|i3b7cf: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2715 | Sessions Expire due to Network partitioning in Zookeeper |
Bug | Open | Major | Unresolved | Unassigned | Tharindu Kumara | Tharindu Kumara | 09/Mar/17 02:12 | 10/Mar/17 00:38 | 3.4.9 | c client | 0 | 3 | Recently carried out a test to find the behavior of clients when a ZooKeeper server is isolated from the ZooKeeper leader. An ensemble of 3 ZooKeeper servers, called A, B and C, was used, and the quorum was set up as below. A - Follower B - Leader C - Follower A <==> B <==> C I____________I Three clients are connected to the ensemble as follows. C1 is connected to A; both C1 and A are on the same machine. C2 is connected to B; both C2 and B are on the same machine. C3 is connected to C; both C3 and C are on the same machine. The iptables utility is used to remove the network link between B and C. Commands used: iptables -I INPUT -s Server_B_IP -j DROP iptables -I INPUT -s Server_C_IP -j DROP After removing the link, the connections look like below. A <===> B C I________I Simply put, there is no way to send any packets from ZooKeeper server B to ZooKeeper server C and vice versa, although the connection between B and C still exists. Likewise, there is no way to send any packets from B to C3 and vice versa, although the connection between B and C3 still exists. What I noticed is that the client connected to ZooKeeper server "C" could not connect to the ensemble, resulting in a session expiration timeout. For this experiment I used a tickTime of 3000ms and a client session expiration timeout of 45000ms, and also tested with different combinations. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 2 weeks ago | 0|i3b5vj: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2714 | Zookeeper (standalone) failed to start up |
Bug | Resolved | Blocker | Cannot Reproduce | Unassigned | Daniel C | Daniel C | 08/Mar/17 18:17 | 09/Mar/17 11:48 | 09/Mar/17 11:48 | 3.4.6 | server | 0 | 3 | LINUX | We've a standalone ZK setup. Upon restart, it failed to serve requests. Here are the logs: ------------------ 2017-03-05 17:33:58,888 [myid:] - INFO [main:QuorumPeerConfig@103] - Reading configuration from: /zookeeper/zookeeper-3.4.6/conf/zoo.1.cfg 2017-03-05 17:33:58,898 [myid:] - WARN [main:QuorumPeerConfig@293] - No server failure will be tolerated. You need at least 3 servers. 2017-03-05 17:33:58,898 [myid:] - INFO [main:QuorumPeerConfig@340] - Defaulting to majority quorums 2017-03-05 17:33:58,909 [myid:1] - INFO [main:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 10 2017-03-05 17:33:58,910 [myid:1] - INFO [main:DatadirCleanupManager@79] - autopurge.purgeInterval set to 5 2017-03-05 17:33:58,911 [myid:1] - INFO [PurgeTask:DatadirCleanupManager$PurgeTask@138] - Purge task started. 2017-03-05 17:33:58,946 [myid:1] - INFO [main:QuorumPeerMain@127] - Starting quorum peer 2017-03-05 17:33:58,966 [myid:1] - INFO [PurgeTask:DatadirCleanupManager$PurgeTask@144] - Purge task completed. 2017-03-05 17:33:58,991 [myid:1] - INFO [main:NIOServerCnxnFactory@94] - binding to port 0.0.0.0/0.0.0.0:2181 2017-03-05 17:33:59,016 [myid:1] - INFO [main:QuorumPeer@959] - tickTime set to 2000 2017-03-05 17:33:59,016 [myid:1] - INFO [main:QuorumPeer@979] - minSessionTimeout set to -1 2017-03-05 17:33:59,016 [myid:1] - INFO [main:QuorumPeer@990] - maxSessionTimeout set to -1 2017-03-05 17:33:59,016 [myid:1] - INFO [main:QuorumPeer@1005] - initLimit set to 20 2017-03-05 17:34:01,328 [myid:1] - INFO [main:QuorumPeer@473] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2017-03-05 17:34:01,332 [myid:1] - INFO [main:QuorumPeer@488] - acceptedEpoch not found! Creating with a reasonable default of 0. 
This should only happen when you are upgrading your installation 2017-03-05 17:34:01,335 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /10.245.66.147:48198 2017-03-05 17:34:01,339 [myid:1] - INFO [Thread-4:QuorumCnxManager$Listener@504] - My election bind port: server001-internal/10.245.66.137:3888 2017-03-05 17:34:01,346 [myid:1] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@362] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running 2017-03-05 17:34:01,346 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for client /10.245.66.147:48198 (no session established for client) 2017-03-05 17:34:01,346 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /10.245.66.147:48199 2017-03-05 17:34:01,347 [myid:1] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@362] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running 2017-03-05 17:34:01,347 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for client /10.245.66.147:48199 (no session established for client) 2017-03-05 17:34:01,347 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /10.245.66.147:48200 2017-03-05 17:34:01,347 [myid:1] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@362] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running 2017-03-05 17:34:01,348 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for client /10.245.66.147:48200 (no session established for client) 2017-03-05 17:34:01,348 [myid:1] - INFO 
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /10.245.66.147:48201 2017-03-05 17:34:01,348 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /10.245.66.137:46628 2017-03-05 17:34:01,348 [myid:1] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@362] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running ------------------ Is it a race issue during startup? 2017-03-05 17:34:01,346 [myid:1] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@362] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 2 weeks ago | 0|i3b5cv: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2713 | Create CVE text for ZOOKEEPER-2693 "DOS attack on wchp/wchc four letter words (4lw)" |
Task | Resolved | Blocker | Information Provided | Patrick D. Hunt | Patrick D. Hunt | Patrick D. Hunt | 08/Mar/17 11:21 | 19/Dec/19 18:01 | 09/Oct/17 14:04 | 3.4.11, 3.5.4 | security | 0 | 2 | ZOOKEEPER-2693 | We need to agree to the CVE text for ZOOKEEPER-2693. Let's use the comments here to do so. The assigned CVE number is CVE-2017-5637 |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 2 years, 23 weeks, 3 days ago | 0|i3b4jr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2712 | MiniKdc test case intermittently failing due to principal not found in Kerberos database |
Bug | Closed | Critical | Fixed | Rakesh Radhakrishnan | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 08/Mar/17 10:04 | 19/Jul/17 08:56 | 21/Mar/17 21:52 | 3.4.10 | tests | 0 | 4 | ZOOKEEPER-1045 | MiniKdc test cases are intermittently failing due to not finding the principal. Below is the failure stacktrace. {code} 2017-03-08 13:21:10,843 [myid:] - ERROR [NioProcessor-1:AuthenticationService@187] - Error while searching for client learner@EXAMPLE.COM : Client not found in Kerberos database 2017-03-08 13:21:10,843 [myid:] - WARN [NioProcessor-2:KerberosProtocolHandler@241] - Server not found in Kerberos database (7) 2017-03-08 13:21:10,845 [myid:] - WARN [NioProcessor-2:KerberosProtocolHandler@242] - Server not found in Kerberos database (7) 2017-03-08 13:21:10,844 [myid:] - WARN [NioProcessor-1:KerberosProtocolHandler@241] - Client not found in Kerberos database (6) 2017-03-08 13:21:10,845 [myid:] - WARN [NioProcessor-1:KerberosProtocolHandler@242] - Client not found in Kerberos database (6) {code} Will attach the detailed log to jira. |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 3 years, 1 day ago | 0|i3b4ev: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2711 | Deadlock between concurrent 4LW commands that iterate over connections with Netty server |
Bug | Open | Critical | Unresolved | Josh Elser | Josh Elser | Josh Elser | 07/Mar/17 13:19 | 31/Jan/19 04:46 | 0 | 5 | 0 | 2400 | Observed the following issue in some $dayjob testing environments. Line numbers are a little off compared to master/branch-3.5, but I did confirm the same issue exists there.
With the NettyServerCnxnFactory, before a request is dispatched, the code synchronizes on the {{NettyServerCnxn}} object. However, with some 4LW commands (like {{stat}}), each {{ServerCnxn}} object is also synchronized to (safely) iterate over the internal contents of the object to generate the necessary debug message. As such, multiple concurrent {{stat}} commands can both lock their own {{NettyServerCnxn}} objects, and then be blocked waiting to lock each others' {{ServerCnxn}} in the {{StatCommand}}, deadlocked. {noformat} "New I/O worker #55": at org.apache.zookeeper.server.ServerCnxn.dumpConnectionInfo(ServerCnxn.java:407) - waiting to lock <0x00000000fabc01b8> (a org.apache.zookeeper.server.NettyServerCnxn) at org.apache.zookeeper.server.NettyServerCnxn$StatCommand.commandRun(NettyServerCnxn.java:478) at org.apache.zookeeper.server.NettyServerCnxn$CommandThread.run(NettyServerCnxn.java:311) at org.apache.zookeeper.server.NettyServerCnxn$CommandThread.start(NettyServerCnxn.java:306) at org.apache.zookeeper.server.NettyServerCnxn.checkFourLetterWord(NettyServerCnxn.java:677) at org.apache.zookeeper.server.NettyServerCnxn.receiveMessage(NettyServerCnxn.java:790) at org.apache.zookeeper.server.NettyServerCnxnFactory$CnxnChannelHandler.processMessage(NettyServerCnxnFactory.java:211) at org.apache.zookeeper.server.NettyServerCnxnFactory$CnxnChannelHandler.messageReceived(NettyServerCnxnFactory.java:135) - locked <0x00000000fab68178> (a org.apache.zookeeper.server.NettyServerCnxn) at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88) at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559) at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268) at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255) at 
org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88) at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:109) at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312) at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:90) at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) "New I/O worker #51": at org.apache.zookeeper.server.ServerCnxn.dumpConnectionInfo(ServerCnxn.java:407) - waiting to lock <0x00000000fab68178> (a org.apache.zookeeper.server.NettyServerCnxn) at org.apache.zookeeper.server.NettyServerCnxn$StatCommand.commandRun(NettyServerCnxn.java:478) at org.apache.zookeeper.server.NettyServerCnxn$CommandThread.run(NettyServerCnxn.java:311) at org.apache.zookeeper.server.NettyServerCnxn$CommandThread.start(NettyServerCnxn.java:306) at org.apache.zookeeper.server.NettyServerCnxn.checkFourLetterWord(NettyServerCnxn.java:677) at org.apache.zookeeper.server.NettyServerCnxn.receiveMessage(NettyServerCnxn.java:790) at org.apache.zookeeper.server.NettyServerCnxnFactory$CnxnChannelHandler.processMessage(NettyServerCnxnFactory.java:211) at org.apache.zookeeper.server.NettyServerCnxnFactory$CnxnChannelHandler.messageReceived(NettyServerCnxnFactory.java:135) - locked <0x00000000fabc01b8> (a org.apache.zookeeper.server.NettyServerCnxn) at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88) at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) at 
org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559) at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268) at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255) at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88) at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:109) at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312) at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:90) at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) {noformat} It would appear that the synchronization on the {{NettyServerCnxn}} in {{NettyServerCnxnFactory}} is to blame (and I can see why it was done originally). I think we can just use a different Object (and monitor) to provide mutual exclusion at Netty layer (and avoid synchronization issues at the "application" layer). |
100% | 100% | 2400 | 0 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 1 year, 17 weeks ago | 0|i3b2gv: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2710 | Regenerate documentation for branch-3.4 release |
Bug | Closed | Major | Fixed | Rakesh Radhakrishnan | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 07/Mar/17 12:06 | 31/Mar/17 05:01 | 07/Mar/17 12:34 | 3.4.10 | 3.4.10 | documentation | 0 | 2 | This jira can be used to regenerate the documentation, as some of the recent commits didn't regenerate the doc section. | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 2 weeks, 2 days ago | 0|i3b2bz: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2709 | Clarify documentation around "auth" ACL scheme |
Task | Closed | Minor | Fixed | Josh Elser | Josh Elser | Josh Elser | 03/Mar/17 16:28 | 17/May/17 23:44 | 08/Mar/17 20:49 | 3.5.3, 3.6.0 | documentation | 0 | 4 | HBASE-17717 | We recently found in HBASE-17717 that we were incorrectly setting an ACL on our "sensitive" znodes, after the output of {{getACL}} on these nodes didn't match what was expected. In referencing the documentation about how the {{auth}} ACL scheme was supposed to work, it was unclear whether it was a ZooKeeper bug or an HBase bug. After reading some ZooKeeper code, we found that it was an HBase bug, but it would be nice to clarify the docs around this ACL scheme. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 1 week, 3 days ago | 0|i3axvj: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2708 | TracelogFile not being created. |
Bug | Open | Minor | Unresolved | Unassigned | Angelo Esquivel | Angelo Esquivel | 03/Mar/17 14:59 | 03/Mar/17 14:59 | 3.4.6 | 0 | 2 | Windows 10 64bit | We are configuring ZooKeeper with log4j to create a trace log file separate from zookeeper.log. We have tested using the following Java properties: call %JAVA% "-DrequestTraceFile" "-Dzookeeper.log.dir=%ZOO_LOG_DIR%" "-Dzookeeper.root.logger=%ZOO_LOG4J_PROP%" -cp "%CLASSPATH%" %ZOOMAIN% "%ZOOCFG%" %* Is there a way to set this in a separate file? If not, can this be included in the zookeeper.log content? Please let us know if there is a way. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 2 weeks, 6 days ago | 0|i3axnz: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2707 | ZOOKEEPER-2686 Fix "Unexpected bean exists!" issue in WatcherTests |
Sub-task | Resolved | Major | Fixed | Michael Han | Abraham Fine | Abraham Fine | 23/Feb/17 16:26 | 10/Apr/17 13:25 | 10/Apr/17 13:25 | 3.4.10, 3.5.3 | 3.4.11, 3.6.0 | tests | 0 | 2 | ZOOKEEPER-2743 | All the WatcherTests occasionally fail with: {code} Error Message: Unexpected bean exists! expected:<0> but was:<1> Stack Trace: junit.framework.AssertionFailedError: Unexpected bean exists! expected:<0> but was:<1> at org.apache.zookeeper.test.ClientBase.verifyUnexpectedBeans(ClientBase.java:498) at org.apache.zookeeper.test.ClientBase.startServer(ClientBase.java:477) at org.apache.zookeeper.test.ClientBase.setUp(ClientBase.java:460) at org.apache.zookeeper.test.WatcherTest.setUp(WatcherTest.java:76) {code} Here is an example: https://builds.apache.org/job/ZooKeeper_branch35_openjdk7/422/ |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 2 years, 49 weeks, 3 days ago | 0|i3ajen: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2706 | checkstyle broken on branch-3.4 |
Bug | Closed | Major | Fixed | Abraham Fine | Abraham Fine | Abraham Fine | 23/Feb/17 14:54 | 31/Mar/17 05:01 | 22/Mar/17 22:14 | 3.4.10 | 0 | 4 | While working on ZOOKEEPER-2696, [~rakeshr] and I noticed that checkstyle is failing to execute on branch-3.4 with the following error: {code} BUILD FAILED /Users/abefine/cloudera_code/zookeeper/build.xml:1595: Unable to create a Checker: cannot initialize module PackageHtml - Unable to instantiate PackageHtml {code} This should essentially be a backport of ZOOKEEPER-412 |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years ago | 0|i3aj5r: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2705 | Container node remains indefinitely after session has long expired! |
Bug | Open | Major | Unresolved | Unassigned | Steve Fitzgerald | Steve Fitzgerald | 23/Feb/17 07:24 | 24/Feb/17 11:43 | 3.5.1 | quorum | 0 | 3 | ZOOKEEPER-2464 | 5 x RHEL 2.6.32-431.29.2.el6.x86_64 | Zookeeper version: 3.5.1-alpha Curator Framework version: 3.2.0 We have a 5-node cluster. When we register a service instance, everything is created within ZooKeeper successfully, e.g. for a service named "fake-test-service" I can see the following created: 1. /api/enablement/fake-test-service 2. /api/enablement/fake-test-service/bb831396-5c55-4456-a7c0-5950ba294fd5 When I abnormally kill (kill -9) the process that the service is registered from, I expect both of the above to get removed by ZooKeeper when it expires the session. But only /api/enablement/fake-test-service/bb831396-5c55-4456-a7c0-5950ba294fd5 gets removed successfully. Here is a snippet of the log file: {noformat} 2017-02-23 05:50:00,977 [myid:5] - TRACE [SessionTracker:SessionTrackerImpl@208][] - Session closing: 0x502dbce4df60000 2017-02-23 05:50:00,977 [myid:5] - INFO [SessionTracker:ZooKeeperServer@384][] - Expiring session 0x502dbce4df60000, timeout of 40000ms exceeded 2017-02-23 05:50:00,977 [myid:5] - INFO [SessionTracker:QuorumZooKeeperServer@132][] - Submitting global closeSession request for session 0x502dbce4df60000 2017-02-23 05:50:00,977 [myid:5] - TRACE [ProcessThread(sid:5 cport:-1)::ZooTrace@90][] - :Psessionid:0x502dbce4df60000 type:closeSession cxid:0x0 zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 2017-02-23 05:50:00,978 [myid:5] - TRACE [ProcessThread(sid:5 cport:-1)::SessionTrackerImpl@208][] - Session closing: 0x502dbce4df60000 2017-02-23 05:50:00,978 [myid:5] - INFO [ProcessThread(sid:5 cport:-1)::PrepRequestProcessor@649][] - Processed session termination for sessionid: 0x502dbce4df60000 2017-02-23 05:50:00,978 [myid:5] - DEBUG [ProcessThread(sid:5 cport:-1)::CommitProcessor@340][] - Processing request:: sessionid:0x502dbce4df60000 type:closeSession cxid:0x0 zxid:0x1d00000003 
txntype:-11 reqpath:n/a 2017-02-23 05:50:00,978 [myid:5] - DEBUG [ProcessThread(sid:5 cport:-1)::Leader@1066][] - Proposing:: sessionid:0x502dbce4df60000 type:closeSession cxid:0x0 zxid:0x1d00000003 txntype:-11 reqpath:n/a 2017-02-23 05:50:00,981 [myid:5] - TRACE [SyncThread:5:Leader@787][] - Ack zxid: 0x1d00000003 2017-02-23 05:50:00,981 [myid:5] - TRACE [SyncThread:5:Leader@790][] - outstanding proposal: 0x1d00000003 2017-02-23 05:50:00,981 [myid:5] - TRACE [SyncThread:5:Leader@793][] - outstanding proposals all 2017-02-23 05:50:00,982 [myid:5] - TRACE [LearnerHandler-/10.24.128.164:38716:Leader@787][] - Ack zxid: 0x1d00000003 2017-02-23 05:50:00,982 [myid:5] - TRACE [LearnerHandler-/10.24.128.164:38716:Leader@790][] - outstanding proposal: 0x1d00000003 2017-02-23 05:50:00,982 [myid:5] - TRACE [LearnerHandler-/10.24.128.164:38716:Leader@793][] - outstanding proposals all 2017-02-23 05:50:00,982 [myid:5] - TRACE [LearnerHandler-/10.24.128.161:55588:Leader@787][] - Ack zxid: 0x1d00000003 2017-02-23 05:50:00,982 [myid:5] - TRACE [LearnerHandler-/10.24.128.161:55588:Leader@790][] - outstanding proposal: 0x1d00000003 2017-02-23 05:50:00,982 [myid:5] - TRACE [LearnerHandler-/10.24.128.161:55588:Leader@793][] - outstanding proposals all 2017-02-23 05:50:00,982 [myid:5] - DEBUG [LearnerHandler-/10.24.128.161:55588:CommitProcessor@327][] - Committing request:: sessionid:0x502dbce4df60000 type:closeSession cxid:0x0 zxid:0x1d00000003 txntype:-11 reqpath:n/a 2017-02-23 05:50:00,982 [myid:5] - TRACE [LearnerHandler-/10.24.128.162:47580:Leader@787][] - Ack zxid: 0x1d00000003 2017-02-23 05:50:00,982 [myid:5] - TRACE [LearnerHandler-/10.24.128.162:47580:Leader@793][] - outstanding proposals all 2017-02-23 05:50:00,983 [myid:5] - DEBUG [LearnerHandler-/10.24.128.162:47580:Leader@808][] - outstanding is 0 2017-02-23 05:50:00,983 [myid:5] - TRACE [LearnerHandler-/10.24.128.160:41119:Leader@787][] - Ack zxid: 0x1d00000003 2017-02-23 05:50:00,983 [myid:5] - TRACE 
[LearnerHandler-/10.24.128.160:41119:Leader@793][] - outstanding proposals all 2017-02-23 05:50:00,983 [myid:5] - DEBUG [LearnerHandler-/10.24.128.160:41119:Leader@808][] - outstanding is 0 2017-02-23 05:50:00,983 [myid:5] - DEBUG [CommitProcWorkThread-1:FinalRequestProcessor@91][] - Processing request:: sessionid:0x502dbce4df60000 type:closeSession cxid:0x0 zxid:0x1d00000003 txntype:-11 reqpath:n/a 2017-02-23 05:50:00,983 [myid:5] - TRACE [CommitProcWorkThread-1:ZooTrace@90][] - :Esessionid:0x502dbce4df60000 type:closeSession cxid:0x0 zxid:0x1d00000003 txntype:-11 reqpath:n/a 2017-02-23 05:50:00,983 [myid:5] - DEBUG [CommitProcWorkThread-1:DataTree@1034][] - Deleting ephemeral node /api/enablement/fake-test-service/bb831396-5c55-4456-a7c0-5950ba294fd5 for session 0x502dbce4df60000 2017-02-23 05:50:00,983 [myid:5] - DEBUG [CommitProcWorkThread-1:SessionTrackerImpl@218][] - Removing session 0x502dbce4df60000 2017-02-23 05:50:00,983 [myid:5] - TRACE [CommitProcWorkThread-1:ZooTrace@71][] - SessionTrackerImpl --- Removing session 0x502dbce4df60000 2017-02-23 05:50:00,984 [myid:5] - DEBUG [CommitProcWorkThread-1:NettyServerCnxnFactory@411][] - closeSession sessionid:0x361092599260774400 2017-02-23 05:50:00,984 [myid:5] - DEBUG [CommitProcWorkThread-1:NettyServerCnxnFactory@411][] - closeSession sessionid:0x361092599260774400 2017-02-23 05:50:03,525 [myid:5] - TRACE [New I/O worker #5:NettyServerCnxnFactory$CnxnChannelHandler@156][] - message received called BigEndianHeapChannelBuffer(ridx=0, widx=12, cap=12) 2017-02-23 05:50:03,527 [myid:5] - DEBUG [New I/O worker #5:NettyServerCnxnFactory$CnxnChannelHandler@160][] - New message [id: 0xd28589b8, /10.24.128.113:41935 => /10.24.128.165:2281] RECEIVED: BigEndianHeapChannelBuffer(ridx=0, widx=12, cap=12) from [id: 0xd28589b8, /10.24.128.113:41935 => /10.24.128.165:2281] 2017-02-23 05:50:03,527 [myid:5] - DEBUG [New I/O worker #5:NettyServerCnxnFactory$CnxnChannelHandler@175][] - 502d2842d930004 queuedBuffer: null 
2017-02-23 05:50:03,527 [myid:5] - TRACE [New I/O worker #5:NettyServerCnxnFactory$CnxnChannelHandler@202][] - 502d2842d930004 buf 0x00000008fffffffe0000000b 2017-02-23 05:50:03,527 [myid:5] - DEBUG [New I/O worker #5:NettyServerCnxnFactory$CnxnChannelHandler@221][] - not throttled 2017-02-23 05:50:03,527 [myid:5] - TRACE [New I/O worker #5:NettyServerCnxn@355][] - message readable 12 bblenrem 4 2017-02-23 05:50:03,528 [myid:5] - TRACE [New I/O worker #5:NettyServerCnxn@360][] - 502d2842d930004 bbLen 0x 2017-02-23 05:50:03,528 [myid:5] - TRACE [New I/O worker #5:NettyServerCnxn@375][] - 502d2842d930004 bbLen 0x00000008 2017-02-23 05:50:03,528 [myid:5] - TRACE [New I/O worker #5:NettyServerCnxn@382][] - 502d2842d930004 bbLen len is 8 2017-02-23 05:50:03,528 [myid:5] - TRACE [New I/O worker #5:NettyServerCnxn@302][] - message readable 8 bb len 8 java.nio.HeapByteBuffer[pos=0 lim=8 cap=8] 2017-02-23 05:50:03,529 [myid:5] - TRACE [New I/O worker #5:NettyServerCnxn@306][] - 502d2842d930004 bb 0x 2017-02-23 05:50:03,529 [myid:5] - TRACE [New I/O worker #5:NettyServerCnxn@320][] - after readBytes message readable 0 bb len 0 java.nio.HeapByteBuffer[pos=8 lim=8 cap=8] 2017-02-23 05:50:03,529 [myid:5] - TRACE [New I/O worker #5:NettyServerCnxn@325][] - after readbytes 502d2842d930004 bb 0xfffffffe0000000b 2017-02-23 05:50:03,530 [myid:5] - DEBUG [ProcessThread(sid:5 cport:-1)::SessionTrackerImpl@291][] - Checking session 0x502d2842d930004 2017-02-23 05:50:03,530 [myid:5] - DEBUG [ProcessThread(sid:5 cport:-1)::CommitProcessor@340][] - Processing request:: sessionid:0x502d2842d930004 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 2017-02-23 05:50:03,530 [myid:5] - DEBUG [CommitProcWorkThread-1:FinalRequestProcessor@91][] - Processing request:: sessionid:0x502d2842d930004 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 2017-02-23 05:50:03,530 [myid:5] - DEBUG 
[CommitProcWorkThread-1:FinalRequestProcessor@178][] - sessionid:0x502d2842d930004 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 2017-02-23 05:50:03,531 [myid:5] - TRACE [New I/O worker #5:NettyServerCnxnFactory$CnxnChannelHandler@267][] - write complete [id: 0xd28589b8, /10.24.128.113:41935 => /10.24.128.165:2281] WRITTEN_AMOUNT: 85 2017-02-23 05:50:04,275 [myid:5] - ERROR [ContainerManagerTask:ContainerManager$1@84][] - Error checking containers java.lang.NullPointerException at org.apache.zookeeper.server.ContainerManager.getCandidates(ContainerManager.java:151) at org.apache.zookeeper.server.ContainerManager.checkContainers(ContainerManager.java:111) at org.apache.zookeeper.server.ContainerManager$1.run(ContainerManager.java:78) at java.util.TimerThread.mainLoop(Timer.java:555) at java.util.TimerThread.run(Timer.java:505) 2017-02-23 05:50:11,569 [myid:5] - TRACE [New I/O worker #2:NettyServerCnxnFactory$CnxnChannelHandler@156][] - message received called BigEndianHeapChannelBuffer(ridx=0, widx=12, cap=12) 2017-02-23 05:50:11,569 [myid:5] - DEBUG [New I/O worker #2:NettyServerCnxnFactory$CnxnChannelHandler@160][] - New message [id: 0x677c2a25, /10.157.130.185:60591 => /10.24.128.165:2181] RECEIVED: BigEndianHeapChannelBuffer(ridx=0, widx=12, cap=12) from [id: 0x677c2a25, /10.157.130.185:60591 => /10.24.128.165:2181] 2017-02-23 05:50:11,570 [myid:5] - DEBUG [New I/O worker #2:NettyServerCnxnFactory$CnxnChannelHandler@175][] - 10145a3f4f803e5 queuedBuffer: null 2017-02-23 05:50:11,570 [myid:5] - TRACE [New I/O worker #2:NettyServerCnxnFactory$CnxnChannelHandler@202][] - 10145a3f4f803e5 buf 0x00000008fffffffe0000000b {noformat} I believe the NullPointerException in the log above is what makes it fail to remove the remaining /api/enablement/fake-test-service directory. Could someone shed some light on why this might be happening? |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 3 weeks, 6 days ago | 0|i3aicn: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2704 | ZOOKEEPER-2703 Run Jepsen against branch-3.5 / master of ZooKeeper |
Sub-task | Open | Major | Unresolved | Unassigned | Michael Han | Michael Han | 22/Feb/17 20:16 | 14/Dec/19 06:09 | 3.5.2 | 3.7.0 | tests | 0 | 5 | The [Jepsen report|https://aphyr.com/posts/291-jepsen-zookeeper] on ZooKeeper was using an old version of ZooKeeper (3.4.5). It would be good to run Jepsen on trunk / branch-3.5 and see what happens. This will also give our confidence on the quality of the upcoming 3.5 stable release. | test | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 1 year, 3 weeks, 5 days ago | 0|i3ahh3: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2703 | [MASTER ISSUE] Create benchmark/stability tests |
Test | Open | Major | Unresolved | Unassigned | Jordan Zimmerman | Jordan Zimmerman | 22/Feb/17 14:47 | 02/Mar/17 11:07 | java client, recipes, tests | 0 | 4 | ZOOKEEPER-2704 | It would be useful to have objective tests/benchmarks. These tests/benchmarks can be used to validate future changes to ZooKeeper, compare against other similar products (etcd/consul, etc.) or to help promote ZooKeeper. Possible candidates include: * leader election tests/benchmarks * service discovery tests/benchmarks * distributed locks tests/benchmarks * ... Note: each test/benchmark should be a sub-task under this master task |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 3 weeks ago | 0|i3agtr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2702 | zookeeper ensemble took 20 minutes to come back up after leader failed |
Bug | Open | Major | Unresolved | Unassigned | gopalakrishna | gopalakrishna | 22/Feb/17 07:42 | 22/Feb/17 07:52 | 3.4.9 | 1 | 4 | OS version is ubuntu 14.04(trusty) | Zookeeper version : 3.4.9 OS version is ubuntu 14.04(trusty) Default configuration of zoo.cfg tickTime=2000 initLimit=10 syncLimit=5 I have set up the ZooKeeper ensemble with three servers zk1.com, zk2.com, zk3.com. Initial State: ZK1(FOLLOWER)---ZK2(LEADER)-------ZK3(FOLLOWER) This morning, ZK2(LEADER) went down and became a FOLLOWER within a fraction of a second. It took 20 minutes for a new LEADER to be elected for the ensemble. ZK3 was the new LEADER. New State: ZK1(FOLLOWER)----ZK2(FOLLOWER)-----ZK3(LEADER) (after 20 minutes). Can someone help me debug what happened? ZooKeeper is managing the Solr cloud: 2 shards, 4 nodes. |
9223372036854775807 | No Perforce job exists for this issue. | 3 | 9223372036854775807 | 3 years, 4 weeks, 1 day ago | 0|i3afyf: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2701 | Timeout for RecvWorker is too long |
Bug | Open | Major | Unresolved | Unassigned | Jiafu Jiang | Jiafu Jiang | 20/Feb/17 01:22 | 16/Jul/18 03:56 | 3.4.8, 3.4.9, 3.4.10, 3.4.11 | 2 | 6 | Centos6.5 ZooKeeper 3.4.8 |
Environment:
I deploy ZooKeeper in a cluster of three nodes. Each node has three network interfaces (eth0, eth1, eth2). Hostnames are used instead of IP addresses in zoo.cfg, and quorumListenOnAllIPs=true. Problem: I start three ZooKeeper servers (node A, node B, and node C) one by one; when the leader election finishes, node B is the leader. Then I shut down one network interface of node A with the command "ifdown eth0". The ZooKeeper server on node A loses its connection to node B and node C. In my test, it takes about 20 minutes for the ZooKeeper server on node A to notice the event and call QuorumServer.recreateSocketAddress to re-resolve the hostname. Reading the source code, I find the following in {code:java|title=QuorumCnxManager.java:|borderStyle=solid} class RecvWorker extends ZooKeeperThread { Long sid; Socket sock; volatile boolean running = true; final DataInputStream din; final SendWorker sw; RecvWorker(Socket sock, DataInputStream din, Long sid, SendWorker sw) { super("RecvWorker:" + sid); this.sid = sid; this.sock = sock; this.sw = sw; this.din = din; try { // OK to wait until socket disconnects while reading. sock.setSoTimeout(0); } catch (IOException e) { LOG.error("Error while accessing socket for " + sid, e); closeSocket(sock); running = false; } } ... } {code} I notice that the soTimeout is set to 0 in the RecvWorker constructor. This is reasonable when the IP address of a ZooKeeper server never changes, but considering that the IP address of each ZooKeeper server may change, we should set a timeout here. I think this is a problem. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 1 year, 35 weeks, 3 days ago | 0|i3ab9r: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
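A minimal, self-contained sketch (hypothetical class RecvTimeoutSketch, not ZooKeeper code) of the difference a finite read timeout makes: with soTimeout=0 the read below would block forever on a silent peer, while a finite timeout surfaces the stall as a SocketTimeoutException.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class RecvTimeoutSketch {
    // Returns "timed out" when the blocking read gives up, "data" otherwise.
    static String readWithTimeout(int timeoutMs) throws IOException {
        try (ServerSocket server = new ServerSocket(0);
             Socket client = new Socket("localhost", server.getLocalPort());
             Socket peer = server.accept()) {
            // soTimeout=0 (the current RecvWorker setting) would block here
            // forever; a finite timeout surfaces the dead connection.
            peer.setSoTimeout(timeoutMs);
            try {
                peer.getInputStream().read(); // the client never sends anything
                return "data";
            } catch (SocketTimeoutException e) {
                return "timed out";
            }
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(readWithTimeout(500));
    }
}
```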
| ZooKeeper | ZOOKEEPER-2700 | Force ZooKeeper to generate snapshot |
Improvement | Open | Minor | Unresolved | Unassigned | Flier Lu | Flier Lu | 17/Feb/17 08:55 | 30/Jan/19 08:22 | 0 | 3 | 0 | 600 | ZOOKEEPER-1729 | For cold backups or remote offline sync of ZooKeeper instances, we need the latest snapshot. Add a four-letter `snap` command to force ZooKeeper to generate a snapshot. |
100% | 100% | 600 | 0 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 2 weeks, 1 day ago | 0|i3a8of: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2699 | Restrict 4lw commands based on client IP |
Bug | Resolved | Major | Won't Fix | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 17/Feb/17 02:54 | 28/Apr/17 06:49 | 22/Feb/17 10:11 | security, server | 0 | 3 | Currently 4lw commands are executed without authentication and can be accessed from any IP that has access to the ZooKeeper server. ZOOKEEPER-2693 attempts to limit the 4lw commands which are enabled by default or enabled by configuration. In addition to ZOOKEEPER-2693 we should also restrict 4lw commands based on the client IP. This is required for the following scenario: # User wants to enable all the 4lw commands # User wants to limit access to the commands which are considered safe by default. *Implementation:* we can introduce a new property, 4lw.commands.host.whitelist # By default we allow all hosts, but of course only for the 4lw commands exposed as per ZOOKEEPER-2693 # It can be configured to allow individual IPs (192.168.1.2, 192.168.1.3, etc.) # It can also be configured to allow a group of IPs like 192.168.1.* |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 2 years, 46 weeks, 6 days ago | 0|i3a807: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
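A hypothetical sketch of the matching rule the proposal describes for 4lw.commands.host.whitelist values (class and method names here are illustrative, not ZooKeeper API): exact IPs match literally, and a trailing `.*` wildcard matches a whole group.

```java
public class FourLwWhitelistSketch {
    // Matches a whitelist pattern against a client IP: either an exact IP
    // ("192.168.1.2") or a trailing wildcard group ("192.168.1.*").
    static boolean matches(String pattern, String clientIp) {
        if (pattern.endsWith(".*")) {
            // keep the trailing dot so "192.168.1.*" cannot match "192.168.10.5"
            String prefix = pattern.substring(0, pattern.length() - 1);
            return clientIp.startsWith(prefix);
        }
        return pattern.equals(clientIp);
    }

    public static void main(String[] args) {
        System.out.println(matches("192.168.1.*", "192.168.1.77")); // true
        System.out.println(matches("192.168.1.2", "192.168.2.2"));  // false
    }
}
```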
| ZooKeeper | ZOOKEEPER-2698 | SSL support for server to server communication |
New Feature | Resolved | Major | Duplicate | Abraham Fine | Abraham Fine | Abraham Fine | 16/Feb/17 15:28 | 02/Mar/17 13:36 | 02/Mar/17 13:36 | 0 | 2 | ZOOKEEPER-236 | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 5 weeks ago | 0|i3a73j: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2697 | Handle graceful stop of ZookKeeper client |
Improvement | Resolved | Critical | Fixed | Enrico Olivelli | Enrico Olivelli | Enrico Olivelli | 16/Feb/17 08:18 | 12/Jul/17 23:08 | 01/May/17 11:19 | 3.4.9 | 3.5.4, 3.6.0 | java client | 0 | 4 | ZOOKEEPER-1394, CURATOR-408 | As seen in ZOOKEEPER-1394, I would like to have a "close" which waits for all background activities to finish. In tests the method "testableWaitForShutdown" is used. We can add a new ZooKeeper.close(int timeout) method which will act like testableWaitForShutdown, joining all support threads. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 2 years, 46 weeks, 3 days ago | 0|i3a6fj: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
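The shape of the requested close(int timeout) can be sketched with a plain thread join (the class and worker thread below are hypothetical stand-ins, not the real ZooKeeper client threads): signal shutdown, then wait for the support thread to actually terminate instead of returning immediately.

```java
public class CloseTimeoutSketch {
    // Initiates shutdown of a worker thread and waits up to timeoutMs for it
    // to terminate; returns true once the thread has actually stopped.
    static boolean closeAndJoin(Thread worker, long timeoutMs) throws InterruptedException {
        worker.interrupt();     // stand-in for the normal close() signal
        worker.join(timeoutMs); // the proposed addition: wait for termination
        return !worker.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        // A stand-in for a client support thread (e.g. a send thread).
        Thread sendThread = new Thread(() -> {
            try {
                Thread.sleep(60_000);
            } catch (InterruptedException e) {
                // interrupted: wind down and exit
            }
        });
        sendThread.start();
        System.out.println(closeAndJoin(sendThread, 5000));
    }
}
```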
| ZooKeeper | ZOOKEEPER-2696 | Eclipse ant task no longer determines correct classpath for tests after ZOOKEEPER-2689 |
Bug | Closed | Major | Fixed | Abraham Fine | Abraham Fine | Abraham Fine | 14/Feb/17 19:01 | 31/Mar/17 05:01 | 27/Feb/17 01:13 | 3.4.10 | 3.4.10 | build | 0 | 4 | Following the changes made in ZOOKEEPER-2689 IDE's using the .classpath file generated by the eclipse ant task (I tested both idea and eclipse) cannot compile the tests. | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 3 weeks, 3 days ago | 0|i3a38v: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2695 | Handle unknown error for rolling upgrade old client new server scenario |
Bug | Open | Major | Unresolved | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 14/Feb/17 08:38 | 22/Feb/17 10:04 | java client | 0 | 2 | In a ZooKeeper rolling upgrade scenario where the server is new but the client is old, when the server sends an error code which is not understood by the client, the client throws a NullPointerException. KeeperException.SystemErrorException should be thrown for all unknown error codes. |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 4 weeks, 1 day ago | 0|i3a25r: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
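The proposed fallback can be illustrated with a hypothetical code-to-exception mapping (the switch below is a sketch, not the real KeeperException.create logic; the two known codes shown match ZooKeeper's NONODE and NODEEXISTS values): any unrecognized code falls through to a system-error default instead of yielding null.

```java
public class UnknownErrorCodeSketch {
    // Maps a wire error code to an exception name; codes the old client does
    // not recognize fall back to SystemErrorException rather than null.
    static String exceptionFor(int code) {
        switch (code) {
            case -101: return "NoNodeException";     // KeeperException.Code.NONODE
            case -110: return "NodeExistsException"; // KeeperException.Code.NODEEXISTS
            default:   return "SystemErrorException"; // unknown / future codes
        }
    }

    public static void main(String[] args) {
        System.out.println(exceptionFor(-101));
        System.out.println(exceptionFor(-9999)); // a code this client predates
    }
}
```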
| ZooKeeper | ZOOKEEPER-2694 | sync CLI command does not wait for result from server |
Bug | Closed | Major | Fixed | maoling | Mohammad Arshad | Mohammad Arshad | 14/Feb/17 08:12 | 16/Oct/19 14:59 | 23/May/19 12:43 | 3.5.0 | 3.6.0, 3.5.6 | java client | 0 | 3 | 0 | 24000 | The sync CLI command does not wait for the result from the server. It returns immediately after invoking sync's asynchronous API. Executing the below command does not give the expected result: {{<zkServer>/bin/zkCli.sh -server host:port sync /}} |
100% | 100% | 24000 | 0 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 43 weeks ago | 0|i3a247: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
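The fix pattern can be sketched with a latch (a hypothetical stand-in: the spawned thread below simulates the server's async sync callback, it is not real ZooKeeper client code): the CLI submits the request, then blocks on the latch until the callback fires rather than returning at once.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class SyncWaitSketch {
    // Blocks until the (simulated) async completion callback fires, instead of
    // returning as soon as the request has been submitted.
    static boolean syncAndWait(long timeoutMs) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);
        // Stand-in for zk.sync(path, callback, ctx): the server's response
        // arrives later on another thread and invokes the callback.
        new Thread(done::countDown).start();
        return done.await(timeoutMs, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(syncAndWait(5000));
    }
}
```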
| ZooKeeper | ZOOKEEPER-2693 | DOS attack on wchp/wchc four letter words (4lw) |
Bug | Closed | Blocker | Fixed | Michael Han | Patrick D. Hunt | Patrick D. Hunt | 13/Feb/17 22:33 | 06/Aug/18 07:58 | 07/Mar/17 01:26 | 3.4.0, 3.5.1, 3.5.2 | 3.4.10, 3.5.3, 3.6.0 | security, server | 0 | 8 | 0 | 600 | ZOOKEEPER-2726, ZOOKEEPER-2713 | The wchp/wchc four letter words can be exploited in a DOS attack on the ZK client port - typically 2181. The following POC attack was recently published on the web: https://vulners.com/exploitdb/EDB-ID:41277 The most straightforward way to block this attack is to not allow access to the client port to non-trusted clients - i.e. firewall the ZooKeeper service and only allow access to trusted applications using it for coordination. |
100% | 100% | 600 | 0 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 1 week ago | 0|i3a187: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2692 | ZOOKEEPER-2686 Fix race condition in testWatchAutoResetWithPending |
Sub-task | Closed | Major | Fixed | Abraham Fine | Abraham Fine | Abraham Fine | 10/Feb/17 12:53 | 15/Nov/17 22:16 | 21/Feb/17 16:40 | 3.4.9, 3.5.3, 3.6.0 | 3.4.10, 3.5.3, 3.6.0 | tests | 0 | 5 | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 2 years, 18 weeks ago | 0|i39x6n: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2691 | recreateSocketAddresses may recreate the unreachable IP address |
Bug | Resolved | Minor | Fixed | Jiafu Jiang | Jiafu Jiang | Jiafu Jiang | 10/Feb/17 06:08 | 20/Jun/17 12:11 | 26/May/17 18:29 | 3.4.8, 3.4.9, 3.4.10, 3.5.0, 3.5.1, 3.5.2, 3.4.11 | 3.4.11 | 0 | 5 | Centos6.5 Java8 ZooKeeper3.4.8 |
The QuorumPeer$QuorumServer.recreateSocketAddress() is used to resolve the hostname to a new IP address (InetAddress) when any exception happens on the socket. This is very useful when a hostname can be resolved to more than one IP address. But the problem is that the Java API InetAddress.getByName(String hostname) always returns the first IP address when the hostname resolves to more than one, and that first IP address may be unreachable forever. For example, suppose a machine has two network interfaces, eth0 and eth1, where eth0 has ip1 and eth1 has ip2, and the mapping between the hostname and the IP addresses is set in /etc/hosts. When I "close" eth0 with the command "ifdown eth0", InetAddress.getByName(String hostname) will still return ip1, which is unreachable forever. So I think it would be better to check the IP address with InetAddress.isReachable(long) and choose a reachable IP address. I have modified the ZooKeeper source code and tested the new code in my own environment; it works well when I turn down some network interfaces with the "ifdown" command. The original code is: {code:title=QuorumPeer.java|borderStyle=solid} public void recreateSocketAddresses() { InetAddress address = null; try { address = InetAddress.getByName(this.hostname); LOG.info("Resolved hostname: {} to address: {}", this.hostname, address); this.addr = new InetSocketAddress(address, this.port); if (this.electionPort > 0){ this.electionAddr = new InetSocketAddress(address, this.electionPort); } } catch (UnknownHostException ex) { LOG.warn("Failed to resolve address: {}", this.hostname, ex); // Have we succeeded in the past? if (this.addr != null) { // Yes, previously the lookup succeeded. Leave things as they are return; } // The hostname has never resolved. 
Create our InetSocketAddress(es) as unresolved this.addr = InetSocketAddress.createUnresolved(this.hostname, this.port); if (this.electionPort > 0){ this.electionAddr = InetSocketAddress.createUnresolved(this.hostname, this.electionPort); } } } {code} After my modification: {code:title=QuorumPeer.java|borderStyle=solid} public void recreateSocketAddresses() { InetAddress address = null; try { address = getReachableAddress(this.hostname); LOG.info("Resolved hostname: {} to address: {}", this.hostname, address); this.addr = new InetSocketAddress(address, this.port); if (this.electionPort > 0){ this.electionAddr = new InetSocketAddress(address, this.electionPort); } } catch (UnknownHostException ex) { LOG.warn("Failed to resolve address: {}", this.hostname, ex); // Have we succeeded in the past? if (this.addr != null) { // Yes, previously the lookup succeeded. Leave things as they are return; } // The hostname has never resolved. Create our InetSocketAddress(es) as unresolved this.addr = InetSocketAddress.createUnresolved(this.hostname, this.port); if (this.electionPort > 0){ this.electionAddr = InetSocketAddress.createUnresolved(this.hostname, this.electionPort); } } } public InetAddress getReachableAddress(String hostname) throws UnknownHostException { InetAddress[] addresses = InetAddress.getAllByName(hostname); for (InetAddress a : addresses) { try { if (a.isReachable(5000)) { return a; } } catch (IOException e) { LOG.warn("IP address {} is unreachable", a); } } // All the IP address is unreachable, just return the first one. return addresses[0]; } {code} |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 2 years, 39 weeks, 2 days ago | 0|i39wjz: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
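The distinction the report rests on is directly observable with the standard library (this small demo is illustrative only; it resolves "localhost" rather than a multi-homed quorum host): getByName returns only the first resolved address, while getAllByName exposes every candidate, which the proposed getReachableAddress() iterates over.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class ResolveAllSketch {
    // getByName returns a single (first) address; getAllByName returns all of
    // them, allowing a caller to probe each with isReachable().
    static int candidateCount(String hostname) throws UnknownHostException {
        return InetAddress.getAllByName(hostname).length;
    }

    public static void main(String[] args) throws UnknownHostException {
        InetAddress first = InetAddress.getByName("localhost");
        System.out.println("first: " + first);
        System.out.println("candidates: " + candidateCount("localhost"));
    }
}
```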
| ZooKeeper | ZOOKEEPER-2690 | Update documentation source for ZOOKEEPER-2574 |
Bug | Resolved | Minor | Fixed | Mark Fenes | Michael Han | Michael Han | 09/Feb/17 14:34 | 17/Nov/17 05:59 | 15/Nov/17 15:29 | 3.4.9, 3.5.2 | 3.5.4, 3.6.0, 3.4.12 | documentation | 0 | 5 | In ZOOKEEPER-2574, the documentation change (https://github.com/apache/zookeeper/pull/111/) was done directly on the generated document files instead of on the document source. This JIRA is created to track the work of porting the doc change to the doc source so that the change will not get lost between releases. | newbie | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 2 years, 17 weeks, 6 days ago
Reviewed
|
0|i39v7r: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2689 | Fix Kerberos Authentication related test cases |
Bug | Closed | Critical | Fixed | Rakesh Radhakrishnan | Mohammad Arshad | Mohammad Arshad | 09/Feb/17 10:47 | 19/Jul/17 08:55 | 13/Feb/17 13:13 | 3.4.9 | 3.4.10 | tests | 0 | 4 | ZOOKEEPER-1045 | Following test classes failed when branch-3.4 is run on java 6. {noformat} org.apache.zookeeper.server.quorum.auth.MiniKdcTest org.apache.zookeeper.server.quorum.auth.QuorumKerberosAuthTest org.apache.zookeeper.server.quorum.auth.QuorumKerberosHostBasedAuthTest {noformat} Error message is {{org/apache/kerby/kerberos/kerb/KrbException : Unsupported major.minor version 51.0}} |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 5 weeks, 2 days ago | 0|i39urr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2688 | rmr leads to "Node does not exist" |
Bug | Open | Major | Unresolved | Unassigned | Gregory Reshetniak | Gregory Reshetniak | 08/Feb/17 09:23 | 09/Feb/17 09:03 | 3.4.9 | 0 | 3 | Issuing rmr /vault leads to Node does not exist: /vault/core/_lock/_c_e393e8a4d2c984178373be528a25404a-lock-0000000028 I know that rmr is being deprecated in the next version, but I think this might be a cluster consistency bug. Please advise. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 6 weeks ago | 0|i39slr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2687 | Deadlock while shutting down the Leader server. |
Bug | Closed | Major | Fixed | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 08/Feb/17 02:19 | 17/May/17 23:43 | 15/Feb/17 11:24 | 3.5.2, 3.6.0 | 3.5.3, 3.6.0 | server | 0 | 7 | The leader server enters a deadlock while shutting down. This happens only occasionally. The cause and the deadlock flow are the same as in ZOOKEEPER-2380. shutdown was removed from a synchronized block in ZOOKEEPER-2380; now shutdown is called from a synchronized block in another place. {code} // check leader running status if (!this.isRunning()) { shutdown("Unexpected internal error"); return; } {code} |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 5 weeks, 1 day ago | 0|i39rz3: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2686 | Flaky Test: org.apache.zookeeper.test.WatcherTest. |
Test | Resolved | Major | Fixed | Michael Han | Michael Han | Michael Han | 07/Feb/17 17:01 | 10/Apr/17 13:25 | 10/Apr/17 13:25 | 3.4.9, 3.5.2, 3.6.0 | 3.4.11, 3.6.0 | tests | 0 | 1 | ZOOKEEPER-2692, ZOOKEEPER-2707 | ZOOKEEPER-1858, ZOOKEEPER-2135, ZOOKEEPER-2737 | Once in a while, these tests failed. {noformat} org.apache.zookeeper.test.WatcherTest.testWatchAutoResetWithPending org.apache.zookeeper.test.WatcherTest.testWatcherCorrectness org.apache.zookeeper.test.WatcherTest.testWatcherAutoResetDisabledWithLocal org.apache.zookeeper.test.WatcherTest.testWatcherAutoResetWithGlobal org.apache.zookeeper.test.WatcherTest.testWatcherCount org.apache.zookeeper.test.WatcherTest.testWatcherAutoResetDisabledWithGlobal org.apache.zookeeper.test.WatcherTest.testWatcherAutoResetWithLocal {noformat} |
flaky-test | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 6 weeks, 2 days ago | 0|i39r4n: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2685 | How to implement SSL in zookeeper-3.4.5-3 |
Bug | Resolved | Major | Won't Fix | Unassigned | shamim khan | shamim khan | 07/Feb/17 04:08 | 07/Feb/17 23:14 | 07/Feb/17 23:14 | 3.4.5 | java client | 0 | 2 | want to implement SSL in zookeeeper. But not able to implement as version issue. So how can we implement in zookeeper-3.4.5-3. | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 6 weeks, 1 day ago | 0|i39pkf: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2684 | Fix a crashing bug in the mixed workloads commit processor |
Bug | Resolved | Blocker | Fixed | Kfir Lev-Ari | Ryan Zhang | Ryan Zhang | 07/Feb/17 00:23 | 30/Jan/19 13:46 | 03/Nov/17 00:20 | 3.6.0 | 3.6.0 | server | 0 | 9 | 0 | 1200 | ZOOKEEPER-2024 | with pretty heavy load on a real cluster | We deployed our build with ZOOKEEPER-2024 and it quickly started to crash with the following error atla-buh-05-sr1.prod.twttr.net: 2017-01-18 22:24:42,305 - ERROR [CommitProcessor:2] -org.apache.zookeeper.server.quorum.CommitProcessor.run(CommitProcessor.java:268) – Got cxid 0x119fa expected 0x11fc5 for client session id 1009079ba470055 atla-buh-05-sr1.prod.twttr.net: 2017-01-18 22:32:04,746 - ERROR [CommitProcessor:2] -org.apache.zookeeper.server.quorum.CommitProcessor.run(CommitProcessor.java:268) – Got cxid 0x698 expected 0x928 for client session id 4002eeb3fd0009d atla-buh-05-sr1.prod.twttr.net: 2017-01-18 22:34:46,648 - ERROR [CommitProcessor:2] -org.apache.zookeeper.server.quorum.CommitProcessor.run(CommitProcessor.java:268) – Got cxid 0x8904 expected 0x8f34 for client session id 51b8905c90251 atla-buh-05-sr1.prod.twttr.net: 2017-01-18 22:43:46,834 - ERROR [CommitProcessor:2] -org.apache.zookeeper.server.quorum.CommitProcessor.run(CommitProcessor.java:268) – Got cxid 0x3a8d expected 0x3ebc for client session id 2051af11af900cc clearly something is not right in the new commit processor per session queue implementation. |
100% | 100% | 1200 | 0 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 2 years, 19 weeks, 6 days ago | 0|i39pbb: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2683 | RaceConditionTest is flaky |
Bug | Closed | Major | Fixed | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 06/Feb/17 13:01 | 17/May/17 23:43 | 13/Feb/17 16:10 | 3.5.2, 3.6.0 | 3.5.3, 3.6.0 | tests | 0 | 5 | ZOOKEEPER-2135 | *Error Message* {noformat} Leader failed to transition to LOOKING or FOLLOWING state {noformat} *Stacktrace* {noformat} junit.framework.AssertionFailedError: Leader failed to transition to LOOKING or FOLLOWING state at org.apache.zookeeper.server.quorum.RaceConditionTest.testRaceConditionBetweenLeaderAndAckRequestProcessor(RaceConditionTest.java:74) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:79) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.lang.Thread.run(Thread.java:745) {noformat} [CI Failures Reference|https://builds.apache.org/job/PreCommit-ZOOKEEPER-github-pr-build/279//testReport/org.apache.zookeeper.server.quorum/RaceConditionTest/testRaceConditionBetweenLeaderAndAckRequestProcessor/] |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 5 weeks, 3 days ago | 0|i39obj: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2682 | Make it optional to fail build on test failure |
Improvement | Closed | Minor | Fixed | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 05/Feb/17 06:27 | 31/Mar/17 05:01 | 06/Feb/17 21:36 | 3.4.10, 3.5.3, 3.6.0 | build, tests | 0 | 4 | Currently, if there is a test failure, the build is marked as failed and exits. I want to rerun the failed test cases instead of exiting. |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 3 years, 6 weeks, 2 days ago |
Reviewed
|
0|i39mtb: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2681 | ConnectionState does not sync startup of ExhibitorEnsembleProvider and Zookeeper connection |
Bug | Resolved | Major | Invalid | Unassigned | Egor Ryashin | Egor Ryashin | 27/Jan/17 17:02 | 28/Jan/17 08:15 | 28/Jan/17 08:15 | 3.4.5 | java client | 0 | 1 | Use CuratorFrameworkFactory.Builder and specify ExhibitorEnsembleProvider. Call build() and start(). Internal ConnectionState.start() calls ensembleProvider.start() which should poll for hostnames to produce connectionString. Without waiting (for connectionString) ConnectionState calls zooKeeper.closeAndReset() and ClientCnxn is created with empty connectionString. That leads to lame zooKeeper sending requests to localhost. {noformat} 2017-01-27T22:56:17,618 INFO [Agents-0] org.apache.curator.framework.imps.CuratorFrameworkImpl - Starting 2017-01-27T22:56:17,619 INFO [Agents-0] org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString= sessionTimeout=60001 watcher=org.apache.curator.ConnectionState@4402fad2 2017-01-27T22:56:17,625 INFO [Agents-0-SendThread(127.0.0.1:2181)] org.apache.zookeeper.ClientCnxn - Opening socket connection to server 127.0.0.1/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) 2017-01-27T22:56:18,632 WARN [Agents-0-SendThread(127.0.0.1:2181)] org.apache.zookeeper.ClientCnxn - Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused: no further information at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[?:1.8.0_74] at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[?:1.8.0_74] at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350) ~[zookeeper-3.4.5.jar:3.4.5-1392090] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068) [zookeeper-3.4.5.jar:3.4.5-1392090] 2017-01-27T22:56:19,733 INFO [Agents-0-SendThread(127.0.0.1:2181)] org.apache.zookeeper.ClientCnxn - Opening socket connection to server 127.0.0.1/127.0.0.1:2181. 
Will not attempt to authenticate using SASL (unknown error) 2017-01-27T22:56:19,807 INFO [Curator-ExhibitorEnsembleProvider-0] org.apache.curator.ensemble.exhibitor.ExhibitorEnsembleProvider - Connection string has changed. Old value (), new value (172.19.2.158:2181,172.19.2.15:2181,172.19.2.177:2181,172.19.2.4:2181,172.19.2.89:2181,172.19.2.72:2181) 2017-01-27T22:56:20,734 WARN [Agents-0-SendThread(127.0.0.1:2181)] org.apache.zookeeper.ClientCnxn - Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused: no further information at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[?:1.8.0_74] at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[?:1.8.0_74] at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350) ~[zookeeper-3.4.5.jar:3.4.5-1392090] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068) [zookeeper-3.4.5.jar:3.4.5-1392090] 2017-01-27T22:56:21,835 INFO [Agents-0-SendThread(127.0.0.1:2181)] org.apache.zookeeper.ClientCnxn - Opening socket connection to server 127.0.0.1/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) {noformat} |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 7 weeks, 5 days ago | 0|i39bhr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
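The race described above can be sketched without Curator: a provider publishes its connection string asynchronously, and a client that reads it too early sees an empty string. This is a minimal, self-contained illustration under assumed names (PollingEnsembleProvider and its methods are hypothetical, not Curator's actual API); the workaround shown is to block on the first successful poll before handing the string to the client.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical stand-in for an ensemble provider that discovers hostnames
// asynchronously, the way ExhibitorEnsembleProvider polls Exhibitor.
class PollingEnsembleProvider {
    private final AtomicReference<String> connectionString = new AtomicReference<>("");
    private final CountDownLatch firstPoll = new CountDownLatch(1);

    void start() {
        // Simulate the background poller publishing hostnames after a delay.
        Thread poller = new Thread(() -> {
            try {
                Thread.sleep(200);
            } catch (InterruptedException e) {
                return;
            }
            connectionString.set("172.19.2.158:2181,172.19.2.15:2181");
            firstPoll.countDown();
        });
        poller.setDaemon(true);
        poller.start();
    }

    String getConnectionString() {
        return connectionString.get();
    }

    // Workaround: wait for the first successful poll (bounded by a timeout)
    // so the client is never handed an empty connection string.
    boolean awaitFirstPoll(long timeoutMillis) {
        try {
            return firstPoll.await(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }
}

public class EnsembleRace {
    public static void main(String[] args) {
        PollingEnsembleProvider provider = new PollingEnsembleProvider();
        provider.start();

        // The bug: reading immediately usually yields "" here, and a real
        // client built from an empty string falls back to localhost:2181.
        System.out.println("immediately: '" + provider.getConnectionString() + "'");

        // The workaround: start the client only once hostnames are known.
        if (provider.awaitFirstPoll(5000)) {
            System.out.println("after first poll: '" + provider.getConnectionString() + "'");
        }
    }
}
```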
| ZooKeeper | ZOOKEEPER-2680 | Correct DataNode.getChildren() inconsistent behaviour. |
Bug | Closed | Major | Fixed | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 27/Jan/17 01:03 | 31/Mar/17 05:01 | 10/Feb/17 12:29 | 3.4.9, 3.5.1 | 3.4.10, 3.5.3, 3.6.0 | server | 1 | 7 | ZOOKEEPER-2464 | The DataNode.getChildren() API returns either null or an empty set when the node has no children, depending on when the API is called. Its behavior should be changed so that it always returns an empty set when the node has no children. *DataNode.getChildren() API current behavior:* # returns null initially: when a DataNode is created and no children have been added yet, DataNode.getChildren() returns null # returns an empty set after all children are deleted: create a node, add a child, then delete the child; DataNode.getChildren() returns an empty set. After the fix, DataNode.getChildren() should return an empty set in both cases. |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 5 weeks, 3 days ago | 0|i39acv: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
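The inconsistency above comes from lazily initializing the children set. This minimal sketch (hypothetical class, not ZooKeeper's actual DataNode) reproduces both cases and shows the fixed accessor that always returns a set, never null:

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of a node whose children set is created lazily,
// mirroring the inconsistency described in ZOOKEEPER-2680.
class DataNodeSketch {
    private Set<String> children; // stays null until the first child is added

    void addChild(String child) {
        if (children == null) {
            children = new HashSet<>();
        }
        children.add(child);
    }

    void removeChild(String child) {
        if (children != null) {
            children.remove(child);
        }
    }

    // Buggy behavior: null before any child was ever added, but an empty
    // set after the last child is removed.
    Set<String> getChildrenBuggy() {
        return children;
    }

    // Fixed behavior: always a (possibly empty) set, never null.
    Set<String> getChildren() {
        return children == null ? Collections.<String>emptySet() : children;
    }
}

public class DataNodeDemo {
    public static void main(String[] args) {
        DataNodeSketch node = new DataNodeSketch();
        System.out.println(node.getChildrenBuggy()); // null  (case 1)
        System.out.println(node.getChildren());      // []

        node.addChild("a");
        node.removeChild("a");
        System.out.println(node.getChildrenBuggy()); // []    (case 2)
        System.out.println(node.getChildren());      // []
    }
}
```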
| ZooKeeper | ZOOKEEPER-2679 | ZOOKEEPER-3170 Flaky Test: org.apache.zookeeper.server.quorum.auth.QuorumAuthUpgradeTest.testRollingUpgrade |
Sub-task | Resolved | Major | Cannot Reproduce | Andor Molnar | Michael Han | Michael Han | 26/Jan/17 16:57 | 25/Oct/18 11:01 | 25/Oct/18 11:01 | 3.4.10 | quorum, security, server, tests | 1 | 4 | This flaky test, introduced as part of ZOOKEEPER-1045, fails regularly both in our internal test bot and in the upstream Apache build bot. {noformat} Error Message waiting for server1being up Stacktrace junit.framework.AssertionFailedError: waiting for server1being up at org.apache.zookeeper.server.quorum.auth.QuorumAuthUpgradeTest.restartServer(QuorumAuthUpgradeTest.java:232) at org.apache.zookeeper.server.quorum.auth.QuorumAuthUpgradeTest.testRollingUpgrade(QuorumAuthUpgradeTest.java:204) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:52) Standard Output 2017-01-24 22:12:19,965 [myid:] - INFO [main:ZKTestCase$1@50] - STARTING testAuthLearnerAgainstNoAuthRequiredServer 2017-01-24 22:12:19,971 [myid:] - INFO [Thread-0:JUnit4ZKTestRunner$LoggedInvokeMethod@50] - RUNNING TEST METHOD testAuthLearnerAgainstNoAuthRequiredServer 2017-01-24 22:12:19,972 [myid:] - INFO [Thread-0:PortAssignment@32] - assigning port 11221 2017-01-24 22:12:19,973 [myid:] - INFO [Thread-0:PortAssignment@32] - assigning port 11222 2017-01-24 22:12:19,973 [myid:] - INFO [Thread-0:PortAssignment@32] - assigning port 11223 2017-01-24 22:12:19,973 [myid:] - INFO [Thread-0:PortAssignment@32] - assigning port 11224 2017-01-24 22:12:19,973 [myid:] - INFO [Thread-0:PortAssignment@32] - assigning port 11225 2017-01-24 22:12:19,973 [myid:] - INFO [Thread-0:PortAssignment@32] - assigning port 11226 2017-01-24 22:12:19,974 [myid:] - INFO [Thread-0:QuorumPeerTestBase$MainThread@81] - id = 0 tmpDir = /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test8717551953335097579.junit.dir clientPort = 11221 2017-01-24 22:12:19,981 [myid:] - INFO [Thread-0:QuorumPeerTestBase$MainThread@81] - id = 1 tmpDir = 
/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test4158500655544466289.junit.dir clientPort = 11224 2017-01-24 22:12:20,048 [myid:] - INFO [Thread-0:FourLetterWordMain@43] - connecting to 127.0.0.1 11221 2017-01-24 22:12:20,048 [myid:] - INFO [Thread-2:QuorumPeerConfig@111] - Reading configuration from: /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test4158500655544466289.junit.dir/zoo.cfg 2017-01-24 22:12:20,050 [myid:] - INFO [Thread-1:QuorumPeerConfig@111] - Reading configuration from: /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test8717551953335097579.junit.dir/zoo.cfg 2017-01-24 22:12:20,054 [myid:] - WARN [Thread-2:QuorumPeerConfig@327] - No server failure will be tolerated. You need at least 3 servers. 2017-01-24 22:12:20,056 [myid:] - INFO [Thread-2:QuorumPeerConfig@374] - Defaulting to majority quorums 2017-01-24 22:12:20,059 [myid:] - WARN [Thread-1:QuorumPeerConfig@327] - No server failure will be tolerated. You need at least 3 servers. 2017-01-24 22:12:20,064 [myid:] - INFO [Thread-0:ClientBase@246] - server 127.0.0.1:11221 not up java.net.ConnectException: Connection refused 2017-01-24 22:12:20,070 [myid:] - INFO [Thread-1:QuorumPeerConfig@374] - Defaulting to majority quorums 2017-01-24 22:12:20,071 [myid:1] - INFO [Thread-2:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3 2017-01-24 22:12:20,071 [myid:1] - INFO [Thread-2:DatadirCleanupManager@79] - autopurge.purgeInterval set to 0 2017-01-24 22:12:20,071 [myid:1] - INFO [Thread-2:DatadirCleanupManager@101] - Purge task is not scheduled. 2017-01-24 22:12:20,075 [myid:0] - INFO [Thread-1:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3 2017-01-24 22:12:20,075 [myid:0] - INFO [Thread-1:DatadirCleanupManager@79] - autopurge.purgeInterval set to 0 2017-01-24 22:12:20,075 [myid:0] - INFO [Thread-1:DatadirCleanupManager@101] - Purge task is not scheduled. 
2017-01-24 22:12:20,210 [myid:1] - WARN [Thread-2:QuorumPeerMain@129] - Unable to register log4j JMX control javax.management.InstanceAlreadyExistsException: log4j:hiearchy=default at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324) at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) at org.apache.zookeeper.jmx.ManagedUtil.registerLog4jMBeans(ManagedUtil.java:53) at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:127) at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:116) at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.run(QuorumPeerTestBase.java:140) at java.lang.Thread.run(Thread.java:745) 2017-01-24 22:12:20,213 [myid:1] - INFO [Thread-2:QuorumPeerMain@132] - Starting quorum peer 2017-01-24 22:12:20,222 [myid:0] - INFO [Thread-1:QuorumPeerMain@132] - Starting quorum peer 2017-01-24 22:12:20,237 [myid:0] - INFO [Thread-1:NIOServerCnxnFactory@94] - binding to port 0.0.0.0/0.0.0.0:11221 2017-01-24 22:12:20,241 [myid:1] - INFO [Thread-2:NIOServerCnxnFactory@94] - binding to port 0.0.0.0/0.0.0.0:11224 2017-01-24 22:12:20,275 [myid:0] - INFO [Thread-1:QuorumPeer@1048] - minSessionTimeout set to -1 2017-01-24 22:12:20,275 [myid:1] - INFO [Thread-2:QuorumPeer@1048] - minSessionTimeout set to -1 2017-01-24 22:12:20,275 [myid:0] - INFO [Thread-1:QuorumPeer@1059] - maxSessionTimeout set to -1 2017-01-24 22:12:20,275 [myid:1] - INFO [Thread-2:QuorumPeer@1059] - maxSessionTimeout set 
to -1 2017-01-24 22:12:20,277 [myid:0] - INFO [Thread-1:QuorumPeer@1279] - quorum.auth.enableSasl set to true 2017-01-24 22:12:20,277 [myid:1] - INFO [Thread-2:QuorumPeer@1279] - quorum.auth.enableSasl set to true 2017-01-24 22:12:20,277 [myid:0] - INFO [Thread-1:QuorumPeer@1264] - quorum.auth.serverRequireSasl set to false 2017-01-24 22:12:20,277 [myid:1] - INFO [Thread-2:QuorumPeer@1264] - quorum.auth.serverRequireSasl set to false 2017-01-24 22:12:20,277 [myid:0] - INFO [Thread-1:QuorumPeer@1270] - quorum.auth.learnerRequireSasl set to false 2017-01-24 22:12:20,278 [myid:1] - INFO [Thread-2:QuorumPeer@1270] - quorum.auth.learnerRequireSasl set to false 2017-01-24 22:12:20,278 [myid:0] - INFO [Thread-1:QuorumPeer@1286] - quorum.auth.kerberos.servicePrincipal set to zkquorum/localhost 2017-01-24 22:12:20,278 [myid:1] - INFO [Thread-2:QuorumPeer@1286] - quorum.auth.kerberos.servicePrincipal set to zkquorum/localhost 2017-01-24 22:12:20,278 [myid:0] - INFO [Thread-1:QuorumPeer@1298] - quorum.auth.server.saslLoginContext set to QuorumServer 2017-01-24 22:12:20,278 [myid:1] - INFO [Thread-2:QuorumPeer@1298] - quorum.auth.server.saslLoginContext set to QuorumServer 2017-01-24 22:12:20,278 [myid:0] - INFO [Thread-1:QuorumPeer@1292] - quorum.auth.learner.saslLoginContext set to QuorumLearner 2017-01-24 22:12:20,278 [myid:1] - INFO [Thread-2:QuorumPeer@1292] - quorum.auth.learner.saslLoginContext set to QuorumLearner 2017-01-24 22:12:20,278 [myid:0] - INFO [Thread-1:QuorumPeer@1306] - quorum.cnxn.threads.size set to 20 2017-01-24 22:12:20,279 [myid:1] - INFO [Thread-2:QuorumPeer@1306] - quorum.cnxn.threads.size set to 20 2017-01-24 22:12:20,288 [myid:0] - INFO [Thread-1:Login@294] - QuorumServer successfully logged in. 2017-01-24 22:12:20,288 [myid:1] - INFO [Thread-2:Login@294] - QuorumServer successfully logged in. 2017-01-24 22:12:20,292 [myid:0] - INFO [Thread-1:Login@294] - QuorumLearner successfully logged in. 
2017-01-24 22:12:20,292 [myid:1] - INFO [Thread-2:Login@294] - QuorumLearner successfully logged in. 2017-01-24 22:12:20,299 [myid:0] - INFO [Thread-1:QuorumPeer@540] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2017-01-24 22:12:20,299 [myid:1] - INFO [Thread-2:QuorumPeer@540] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2017-01-24 22:12:20,302 [myid:0] - INFO [Thread-1:QuorumPeer@555] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2017-01-24 22:12:20,302 [myid:1] - INFO [Thread-2:QuorumPeer@555] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2017-01-24 22:12:20,312 [myid:1] - INFO [Thread-4:QuorumCnxManager$Listener@691] - My election bind port: 0.0.0.0/0.0.0.0:11226 2017-01-24 22:12:20,313 [myid:0] - INFO [Thread-5:QuorumCnxManager$Listener@691] - My election bind port: 0.0.0.0/0.0.0.0:11223 2017-01-24 22:12:20,320 [myid:] - INFO [Thread-0:FourLetterWordMain@43] - connecting to 127.0.0.1 11221 2017-01-24 22:12:20,322 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11221:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:58467 2017-01-24 22:12:20,338 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11221:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:58467 2017-01-24 22:12:20,349 [myid:0] - INFO [Thread-6:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:58467 (no session established for client) 2017-01-24 22:12:20,345 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11221:QuorumPeer@781] - LOOKING 2017-01-24 22:12:20,345 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11224:QuorumPeer@781] - LOOKING 2017-01-24 22:12:20,352 [myid:1] - INFO 
[QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11224:FastLeaderElection@744] - New election. My id = 1, proposed zxid=0x0 2017-01-24 22:12:20,354 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11221:FastLeaderElection@744] - New election. My id = 0, proposed zxid=0x0 2017-01-24 22:12:20,357 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state) 2017-01-24 22:12:20,357 [myid:1] - INFO [localhost/127.0.0.1:11226:QuorumCnxManager$Listener@698] - Received connection request /127.0.0.1:48700 2017-01-24 22:12:20,356 [myid:1] - INFO [QuorumConnectionThread-[myid=1]-1:SaslQuorumAuthLearner@79] - Skipping SASL authentication as quorum.auth.learnerRequireSasl=false 2017-01-24 22:12:20,354 [myid:0] - INFO [localhost/127.0.0.1:11223:QuorumCnxManager$Listener@698] - Received connection request /127.0.0.1:51927 2017-01-24 22:12:20,358 [myid:0] - INFO [QuorumConnectionThread-[myid=0]-1:SaslQuorumAuthLearner@79] - Skipping SASL authentication as quorum.auth.learnerRequireSasl=false 2017-01-24 22:12:20,357 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 0 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state) 2017-01-24 22:12:20,359 [myid:0] - INFO [QuorumConnectionThread-[myid=0]-1:QuorumCnxManager@331] - Have smaller server identifier, so dropping the connection: (1, 0) 2017-01-24 22:12:20,363 [myid:1] - WARN [QuorumConnectionThread-[myid=1]-2:SaslQuorumAuthServer@135] - Failed to authenticate using SASL java.io.EOFException at java.io.DataInputStream.readFully(DataInputStream.java:197) at java.io.DataInputStream.readLong(DataInputStream.java:416) at org.apache.jute.BinaryInputArchive.readLong(BinaryInputArchive.java:67) at org.apache.zookeeper.server.quorum.auth.QuorumAuth.nextPacketIsAuth(QuorumAuth.java:91) at 
org.apache.zookeeper.server.quorum.auth.SaslQuorumAuthServer.authenticate(SaslQuorumAuthServer.java:79) at org.apache.zookeeper.server.quorum.QuorumCnxManager.handleConnection(QuorumCnxManager.java:435) at org.apache.zookeeper.server.quorum.QuorumCnxManager.receiveConnection(QuorumCnxManager.java:373) at org.apache.zookeeper.server.quorum.QuorumCnxManager$QuorumConnectionReceiverThread.run(QuorumCnxManager.java:409) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 2017-01-24 22:12:20,363 [myid:1] - WARN [QuorumConnectionThread-[myid=1]-2:SaslQuorumAuthServer@136] - Maintaining learner connection despite SASL authentication failure. server addr: /127.0.0.1:48700, quorum.auth.serverRequireSasl: false 2017-01-24 22:12:20,364 [myid:1] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@896] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1049) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:73) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:884) 2017-01-24 22:12:20,364 [myid:0] - INFO [localhost/127.0.0.1:11223:QuorumCnxManager$Listener@698] - Received connection request /127.0.0.1:51929 2017-01-24 22:12:20,364 [myid:1] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@906] - Send worker leaving thread 2017-01-24 22:12:20,364 [myid:1] - WARN 
[RecvWorker:0:QuorumCnxManager$RecvWorker@983] - Interrupting SendWorker 2017-01-24 22:12:20,365 [myid:0] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@901] - Exception when using channel: for id 1 my id = 0 error = java.net.SocketException: Broken pipe 2017-01-24 22:12:20,365 [myid:0] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@906] - Send worker leaving thread 2017-01-24 22:12:20,366 [myid:0] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@983] - Interrupting SendWorker 2017-01-24 22:12:20,366 [myid:1] - INFO [QuorumConnectionThread-[myid=1]-3:SaslQuorumAuthLearner@79] - Skipping SASL authentication as quorum.auth.learnerRequireSasl=false 2017-01-24 22:12:20,378 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 0 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state) 2017-01-24 22:12:20,379 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state) 2017-01-24 22:12:20,380 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state) 2017-01-24 22:12:20,380 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state) 2017-01-24 22:12:20,581 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11221:QuorumPeer@849] - FOLLOWING 2017-01-24 22:12:20,581 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11224:QuorumPeer@861] - LEADING 2017-01-24 22:12:20,585 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11224:Leader@62] - TCP NoDelay set to: true 2017-01-24 22:12:20,588 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11221:Learner@85] - TCP NoDelay set to: true 2017-01-24 
22:12:20,593 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11221:Environment@100] - Server environment:zookeeper.version=3.4.5--1, built on 01/25/2017 06:01 GMT 2017-01-24 22:12:20,593 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11221:Environment@100] - Server environment:host.name=ec2-unittest-ub12-m3-large-0e11.vpc.cloudera.com 2017-01-24 22:12:20,594 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11221:Environment@100] - Server environment:java.version=1.7.0_75 2017-01-24 22:12:20,594 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11221:Environment@100] - Server environment:java.vendor=Oracle Corporation 2017-01-24 22:12:20,594 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11221:Environment@100] - Server environment:java.home=/data/jenkins/tools/hudson.model.JDK/Java_7/jre 2017-01-24 22:12:20,594 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11221:Environment@100] - Server environment:java.class.path=/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/classes:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/antlr-2.7.6.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/checkstyle-5.0.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/commons-beanutils-core-1.7.0.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/commons-cli-1.0.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/commons-collections-3.2.2.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/commons-io-2.4.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/commons-lang-1.0.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/commons-logging-1.0.3.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/google-collections-0.9.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/junit-4.8.1.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/kerb-admin-1.0.0-RC2.jar:/data/jenki
ns/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/kerb-client-1.0.0-RC2.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/kerb-common-1.0.0-RC2.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/kerb-core-1.0.0-RC2.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/kerb-crypto-1.0.0-RC2.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/kerb-identity-1.0.0-RC2.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/kerb-server-1.0.0-RC2.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/kerb-simplekdc-1.0.0-RC2.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/kerb-util-1.0.0-RC2.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/kerby-asn1-1.0.0-RC2.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/kerby-config-1.0.0-RC2.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/kerby-pkix-1.0.0-RC2.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/kerby-util-1.0.0-RC2.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/log4j-1.2.17.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/mockito-all-1.8.2.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/slf4j-api-1.7.14.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/slf4j-log4j12-1.7.14.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/classes:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/src/java/lib/ivy-2.2.0.jar:/data/jenkins/tools/hudson.tasks.Ant_AntInstallation/Ant_1.8.2/apache-ant-1.9.6/lib/ant.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/lib/jline-2.11.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/lib/log4j-1.2.16.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/lib/netty-3.10.5.Final.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/lib/slf4j-api-1.7.5.jar:/data/jenkins/workspace/CDH
5-ZooKeeper-3.4.5-JDK7/build/lib/slf4j-log4j12-1.7.5.jar:/mnt/toolchain/clover-ant-4.0.3/lib/clover.jar:/data/jenkins/tools/hudson.tasks.Ant_AntInstallation/Ant_1.8.2/apache-ant-1.9.6/lib/ant-launcher.jar:/data/jenkins/tools/hudson.tasks.Ant_AntInstallation/Ant_1.8.2/apache-ant-1.9.6/lib/ant-junit.jar:/data/jenkins/tools/hudson.tasks.Ant_AntInstallation/Ant_1.8.2/apache-ant-1.9.6/lib/ant-junit4.jar 2017-01-24 22:12:20,594 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11221:Environment@100] - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib 2017-01-24 22:12:20,594 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11221:Environment@100] - Server environment:java.io.tmpdir=/tmp 2017-01-24 22:12:20,594 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11221:Environment@100] - Server environment:java.compiler=<NA> 2017-01-24 22:12:20,594 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11221:Environment@100] - Server environment:os.name=Linux 2017-01-24 22:12:20,594 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11221:Environment@100] - Server environment:os.arch=amd64 2017-01-24 22:12:20,594 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11221:Environment@100] - Server environment:os.version=3.2.0-33-virtual 2017-01-24 22:12:20,594 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11221:Environment@100] - Server environment:user.name=jenkins 2017-01-24 22:12:20,594 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11221:Environment@100] - Server environment:user.home=/var/lib/jenkins 2017-01-24 22:12:20,595 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11221:Environment@100] - Server environment:user.dir=/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7 2017-01-24 22:12:20,597 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11224:ZooKeeperServer@162] - Created server with tickTime 4000 minSessionTimeout 8000 maxSessionTimeout 80000 datadir 
/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test4158500655544466289.junit.dir/data/version-2 snapdir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test4158500655544466289.junit.dir/data/version-2 2017-01-24 22:12:20,597 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11221:ZooKeeperServer@162] - Created server with tickTime 4000 minSessionTimeout 8000 maxSessionTimeout 80000 datadir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test8717551953335097579.junit.dir/data/version-2 snapdir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test8717551953335097579.junit.dir/data/version-2 2017-01-24 22:12:20,599 [myid:] - INFO [Thread-0:FourLetterWordMain@43] - connecting to 127.0.0.1 11221 2017-01-24 22:12:20,599 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11224:Leader@356] - LEADING - LEADER ELECTION TOOK - 247 2017-01-24 22:12:20,600 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11221:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:58471 2017-01-24 22:12:20,600 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11221:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:58471 2017-01-24 22:12:20,601 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11221:Follower@64] - FOLLOWING - LEADER ELECTION TOOK - 247 2017-01-24 22:12:20,604 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11224:FileTxnSnapLog@281] - Snapshotting: 0x0 to /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test4158500655544466289.junit.dir/data/version-2/snapshot.0 2017-01-24 22:12:20,608 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11221:SaslQuorumAuthLearner@79] - Skipping SASL authentication as quorum.auth.learnerRequireSasl=false 2017-01-24 22:12:20,609 [myid:0] - INFO [Thread-7:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:58471 (no session established for client) 2017-01-24 22:12:20,619 [myid:1] - INFO 
[LearnerHandler-/127.0.0.1:57759:LearnerHandler@287] - Follower sid: 0 : info : org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@2b6c7d59 2017-01-24 22:12:20,621 [myid:1] - INFO [LearnerHandler-/127.0.0.1:57759:LearnerHandler@342] - Synchronizing with Follower sid: 0 maxCommittedLog=0x0 minCommittedLog=0x0 peerLastZxid=0x0 2017-01-24 22:12:20,621 [myid:1] - INFO [LearnerHandler-/127.0.0.1:57759:LearnerHandler@441] - Sending snapshot last zxid of peer is 0x0 zxid of leader is 0x100000000sent zxid of db as 0x0 2017-01-24 22:12:20,622 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11221:Learner@329] - Getting a snapshot from leader 2017-01-24 22:12:20,626 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11221:FileTxnSnapLog@281] - Snapshotting: 0x0 to /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test8717551953335097579.junit.dir/data/version-2/snapshot.0 2017-01-24 22:12:20,627 [myid:1] - INFO [LearnerHandler-/127.0.0.1:57759:LearnerHandler@477] - Received NEWLEADER-ACK message from 0 2017-01-24 22:12:20,627 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11224:Leader@934] - Have quorum of supporters, sids: [ 0,1 ]; starting up and setting last processed zxid: 0x100000000 2017-01-24 22:12:20,860 [myid:] - INFO [Thread-0:FourLetterWordMain@43] - connecting to 127.0.0.1 11221 2017-01-24 22:12:20,860 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11221:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:58473 2017-01-24 22:12:20,861 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11221:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:58473 2017-01-24 22:12:20,862 [myid:0] - INFO [Thread-10:NIOServerCnxn$StatCommand@655] - Stat command output 2017-01-24 22:12:20,863 [myid:0] - INFO [Thread-10:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:58473 (no session established for client) 2017-01-24 22:12:20,863 [myid:] - INFO [Thread-0:FourLetterWordMain@43] - connecting to 
127.0.0.1 11224 2017-01-24 22:12:20,864 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11224:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:36288 2017-01-24 22:12:20,864 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11224:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:36288 2017-01-24 22:12:20,864 [myid:1] - INFO [Thread-11:NIOServerCnxn$StatCommand@655] - Stat command output 2017-01-24 22:12:20,865 [myid:1] - INFO [Thread-11:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:36288 (no session established for client) 2017-01-24 22:12:20,869 [myid:] - INFO [Thread-0:Environment@100] - Client environment:zookeeper.version=3.4.5--1, built on 01/25/2017 06:01 GMT 2017-01-24 22:12:20,869 [myid:] - INFO [Thread-0:Environment@100] - Client environment:host.name=ec2-unittest-ub12-m3-large-0e11.vpc.cloudera.com 2017-01-24 22:12:20,869 [myid:] - INFO [Thread-0:Environment@100] - Client environment:java.version=1.7.0_75 2017-01-24 22:12:20,870 [myid:] - INFO [Thread-0:Environment@100] - Client environment:java.vendor=Oracle Corporation 2017-01-24 22:12:20,870 [myid:] - INFO [Thread-0:Environment@100] - Client environment:java.home=/data/jenkins/tools/hudson.model.JDK/Java_7/jre 2017-01-24 22:12:20,870 [myid:] - INFO [Thread-0:Environment@100] - Client 
environment:java.class.path=/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/classes:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/antlr-2.7.6.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/checkstyle-5.0.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/commons-beanutils-core-1.7.0.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/commons-cli-1.0.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/commons-collections-3.2.2.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/commons-io-2.4.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/commons-lang-1.0.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/commons-logging-1.0.3.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/google-collections-0.9.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/junit-4.8.1.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/kerb-admin-1.0.0-RC2.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/kerb-client-1.0.0-RC2.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/kerb-common-1.0.0-RC2.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/kerb-core-1.0.0-RC2.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/kerb-crypto-1.0.0-RC2.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/kerb-identity-1.0.0-RC2.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/kerb-server-1.0.0-RC2.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/kerb-simplekdc-1.0.0-RC2.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/kerb-util-1.0.0-RC2.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/kerby-asn1-1.0.0-RC2.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/kerby-config-1.0.0-RC2.jar:/data/jenkins/workspace/C
DH5-ZooKeeper-3.4.5-JDK7/build/test/lib/kerby-pkix-1.0.0-RC2.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/kerby-util-1.0.0-RC2.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/log4j-1.2.17.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/mockito-all-1.8.2.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/slf4j-api-1.7.14.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/lib/slf4j-log4j12-1.7.14.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/classes:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/src/java/lib/ivy-2.2.0.jar:/data/jenkins/tools/hudson.tasks.Ant_AntInstallation/Ant_1.8.2/apache-ant-1.9.6/lib/ant.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/lib/jline-2.11.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/lib/log4j-1.2.16.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/lib/netty-3.10.5.Final.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/lib/slf4j-api-1.7.5.jar:/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/lib/slf4j-log4j12-1.7.5.jar:/mnt/toolchain/clover-ant-4.0.3/lib/clover.jar:/data/jenkins/tools/hudson.tasks.Ant_AntInstallation/Ant_1.8.2/apache-ant-1.9.6/lib/ant-launcher.jar:/data/jenkins/tools/hudson.tasks.Ant_AntInstallation/Ant_1.8.2/apache-ant-1.9.6/lib/ant-junit.jar:/data/jenkins/tools/hudson.tasks.Ant_AntInstallation/Ant_1.8.2/apache-ant-1.9.6/lib/ant-junit4.jar 2017-01-24 22:12:20,870 [myid:] - INFO [Thread-0:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib 2017-01-24 22:12:20,870 [myid:] - INFO [Thread-0:Environment@100] - Client environment:java.io.tmpdir=/tmp 2017-01-24 22:12:20,870 [myid:] - INFO [Thread-0:Environment@100] - Client environment:java.compiler=<NA> 2017-01-24 22:12:20,870 [myid:] - INFO [Thread-0:Environment@100] - Client environment:os.name=Linux 2017-01-24 22:12:20,870 [myid:] - INFO 
[Thread-0:Environment@100] - Client environment:os.arch=amd64 2017-01-24 22:12:20,870 [myid:] - INFO [Thread-0:Environment@100] - Client environment:os.version=3.2.0-33-virtual 2017-01-24 22:12:20,870 [myid:] - INFO [Thread-0:Environment@100] - Client environment:user.name=jenkins 2017-01-24 22:12:20,870 [myid:] - INFO [Thread-0:Environment@100] - Client environment:user.home=/var/lib/jenkins 2017-01-24 22:12:20,871 [myid:] - INFO [Thread-0:Environment@100] - Client environment:user.dir=/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7 2017-01-24 22:12:20,871 [myid:] - INFO [Thread-0:ZooKeeper@438] - Initiating client connection, connectString=127.0.0.1:11221,127.0.0.1:11224 sessionTimeout=30000 watcher=org.apache.zookeeper.test.ClientBase$CountdownWatcher@693a2c6a 2017-01-24 22:12:20,889 [myid:] - WARN [Thread-0-SendThread(localhost:11221):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. 
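
[Editor's note] The SASL warning above fires because the generated jaas.conf lacks a section named `Client`, which is the login-context name the ZooKeeper client looks up by default. For reference, a minimal `Client` section of the kind ZooKeeper's DIGEST-MD5 tests use looks like the sketch below (the username and password values are illustrative, not taken from this run):

```conf
Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="test_user"
    password="test_password";
};
```

When this section is absent, the client logs the warning and proceeds without SASL, which is the unauthenticated path this test exercises.
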
2017-01-24 22:12:20,891 [myid:] - INFO [Thread-0-SendThread(localhost:11221):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11221 2017-01-24 22:12:20,891 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11221:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:58475 2017-01-24 22:12:20,891 [myid:] - INFO [Thread-0-SendThread(localhost:11221):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:58475, server: localhost/127.0.0.1:11221 2017-01-24 22:12:20,893 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11221:ZooKeeperServer@839] - Client attempting to establish new session at /127.0.0.1:58475 2017-01-24 22:12:20,896 [myid:1] - INFO [SyncThread:1:FileTxnLog@199] - Creating new log file: log.100000001 2017-01-24 22:12:20,897 [myid:0] - WARN [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11221:Follower@119] - Got zxid 0x100000001 expected 0x1 2017-01-24 22:12:20,897 [myid:0] - INFO [SyncThread:0:FileTxnLog@199] - Creating new log file: log.100000001 2017-01-24 22:12:20,906 [myid:0] - INFO [CommitProcessor:0:ZooKeeperServer@595] - Established session 0x59d440e8120000 with negotiated timeout 30000 for client /127.0.0.1:58475 2017-01-24 22:12:20,906 [myid:] - INFO [Thread-0-SendThread(localhost:11221):ClientCnxn$SendThread@1235] - Session establishment complete on server localhost/127.0.0.1:11221, sessionid = 0x59d440e8120000, negotiated timeout = 30000 2017-01-24 22:12:20,922 [myid:1] - INFO [ProcessThread(sid:1 cport:-1)::PrepRequestProcessor@494] - Processed session termination for sessionid: 0x59d440e8120000 2017-01-24 22:12:20,925 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11221:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:58475 which had sessionid 0x59d440e8120000 2017-01-24 22:12:20,925 [myid:] - INFO [Thread-0:ZooKeeper@684] - Session: 0x59d440e8120000 closed 2017-01-24 22:12:20,926 [myid:] - INFO 
[Thread-0:JUnit4ZKTestRunner$LoggedInvokeMethod@55] - Memory used 14639 2017-01-24 22:12:20,926 [myid:] - INFO [Thread-0:JUnit4ZKTestRunner$LoggedInvokeMethod@60] - Number of threads 39 2017-01-24 22:12:20,926 [myid:] - INFO [Thread-0:JUnit4ZKTestRunner$LoggedInvokeMethod@65] - FINISHED TEST METHOD testAuthLearnerAgainstNoAuthRequiredServer 2017-01-24 22:12:20,926 [myid:] - INFO [Thread-0-EventThread:ClientCnxn$EventThread@512] - EventThread shut down 2017-01-24 22:12:20,927 [myid:] - INFO [main:QuorumBase@314] - Shutting down quorum peer QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11221 2017-01-24 22:12:20,927 [myid:] - INFO [main:Follower@167] - shutdown called java.lang.Exception: shutdown Follower at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:167) at org.apache.zookeeper.server.quorum.QuorumPeer.shutdown(QuorumPeer.java:896) at org.apache.zookeeper.test.QuorumBase.shutdown(QuorumBase.java:315) at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$TestQPMain.shutdown(QuorumPeerTestBase.java:59) at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.shutdown(QuorumPeerTestBase.java:152) at org.apache.zookeeper.server.quorum.auth.QuorumAuthTestBase.shutdown(QuorumAuthTestBase.java:138) at org.apache.zookeeper.server.quorum.auth.QuorumAuthTestBase.shutdownAll(QuorumAuthTestBase.java:131) at org.apache.zookeeper.server.quorum.auth.QuorumAuthUpgradeTest.tearDown(QuorumAuthUpgradeTest.java:68) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:37) at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:48) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31) at org.junit.runners.ParentRunner.run(ParentRunner.java:236) at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:535) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1182) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:1033) 2017-01-24 22:12:20,928 [myid:] - INFO [main:FollowerZooKeeperServer@139] - Shutting down 2017-01-24 22:12:20,928 [myid:] - INFO [main:ZooKeeperServer@419] - shutting down 2017-01-24 22:12:20,928 [myid:] - INFO [main:FollowerRequestProcessor@105] - Shutting down 2017-01-24 22:12:20,928 [myid:] - INFO [main:CommitProcessor@181] - Shutting down 2017-01-24 22:12:20,928 [myid:0] - INFO [FollowerRequestProcessor:0:FollowerRequestProcessor@95] - FollowerRequestProcessor exited loop! 2017-01-24 22:12:20,929 [myid:] - INFO [main:FinalRequestProcessor@415] - shutdown of request processor complete 2017-01-24 22:12:20,929 [myid:0] - INFO [CommitProcessor:0:CommitProcessor@150] - CommitProcessor exited loop! 
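
[Editor's note] The earlier warning "Got zxid 0x100000001 expected 0x1" is explained by how ZooKeeper packs a zxid: the high 32 bits hold the leader epoch and the low 32 bits hold a per-epoch counter, so 0x100000001 is epoch 1, counter 1 rather than a raw transaction count. A minimal sketch of the decoding (class and method names are illustrative):

```java
public class ZxidDecode {
    // A zxid is a 64-bit value: leader epoch in the high 32 bits,
    // per-epoch transaction counter in the low 32 bits.
    static long epoch(long zxid) {
        return zxid >>> 32;
    }

    static long counter(long zxid) {
        return zxid & 0xFFFFFFFFL;
    }

    public static void main(String[] args) {
        long zxid = 0x100000001L; // the zxid from the warning above
        System.out.println(epoch(zxid));   // 1
        System.out.println(counter(zxid)); // 1
    }
}
```

The follower expected counter 0x1 of epoch 0 but received the first transaction of epoch 1, consistent with the "Creating new log file: log.100000001" entries around it.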
2017-01-24 22:12:20,929 [myid:] - INFO [main:SyncRequestProcessor@175] - Shutting down 2017-01-24 22:12:20,929 [myid:0] - INFO [SyncThread:0:SyncRequestProcessor@155] - SyncRequestProcessor exited! 2017-01-24 22:12:20,930 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11221:NIOServerCnxnFactory@224] - NIOServerCnxn factory exited run method 2017-01-24 22:12:20,930 [myid:0] - ERROR [localhost/127.0.0.1:11223:QuorumCnxManager$Listener@715] - Exception while listening java.net.SocketException: Socket closed at java.net.PlainSocketImpl.socketAccept(Native Method) at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398) at java.net.ServerSocket.implAccept(ServerSocket.java:530) at java.net.ServerSocket.accept(ServerSocket.java:498) at org.apache.zookeeper.server.quorum.QuorumCnxManager$Listener.run(QuorumCnxManager.java:696) 2017-01-24 22:12:20,930 [myid:1] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@980] - Connection broken for id 0, my id = 1, error = java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:965) 2017-01-24 22:12:20,930 [myid:1] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@983] - Interrupting SendWorker 2017-01-24 22:12:20,931 [myid:1] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@896] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1049) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:73) at 
org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:884) 2017-01-24 22:12:20,931 [myid:1] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@906] - Send worker leaving thread 2017-01-24 22:12:20,931 [myid:0] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@896] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1049) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:73) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:884) 2017-01-24 22:12:20,932 [myid:0] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@906] - Send worker leaving thread 2017-01-24 22:12:20,931 [myid:0] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@980] - Connection broken for id 1, my id = 0, error = java.net.SocketException: Socket closed at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.read(SocketInputStream.java:152) at java.net.SocketInputStream.read(SocketInputStream.java:122) at java.io.BufferedInputStream.fill(BufferedInputStream.java:235) at java.io.BufferedInputStream.read(BufferedInputStream.java:254) at java.io.DataInputStream.readInt(DataInputStream.java:387) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:965) 2017-01-24 22:12:20,932 [myid:0] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@983] - Interrupting SendWorker 2017-01-24 22:12:20,933 [myid:] - INFO [main:QuorumBase@318] - Shutting down leader election 
QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11221 2017-01-24 22:12:20,933 [myid:] - INFO [main:QuorumBase@323] - Waiting for QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11221 to exit thread 2017-01-24 22:12:21,930 [myid:0] - INFO [localhost/127.0.0.1:11223:QuorumCnxManager$Listener@728] - Leaving listener 2017-01-24 22:12:22,648 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11221:Follower@167] - shutdown called java.lang.Exception: shutdown Follower at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:167) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:855) 2017-01-24 22:12:22,648 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11221:FollowerZooKeeperServer@139] - Shutting down 2017-01-24 22:12:22,648 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11221:ZooKeeperServer@419] - shutting down 2017-01-24 22:12:22,648 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11221:FollowerRequestProcessor@105] - Shutting down 2017-01-24 22:12:22,648 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11221:CommitProcessor@181] - Shutting down 2017-01-24 22:12:22,648 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11221:FinalRequestProcessor@415] - shutdown of request processor complete 2017-01-24 22:12:22,648 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11221:SyncRequestProcessor@175] - Shutting down 2017-01-24 22:12:22,648 [myid:0] - WARN [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11221:QuorumPeer@879] - QuorumPeer main thread exited 2017-01-24 22:12:22,651 [myid:] - INFO [main:ZKTestCase$1@60] - SUCCEEDED testAuthLearnerAgainstNoAuthRequiredServer 2017-01-24 22:12:22,651 [myid:] - INFO [main:ZKTestCase$1@55] - FINISHED testAuthLearnerAgainstNoAuthRequiredServer 2017-01-24 22:12:22,652 [myid:] - INFO [main:ZKTestCase$1@50] - STARTING testRollingUpgrade 2017-01-24 22:12:22,653 [myid:] - INFO [Thread-12:JUnit4ZKTestRunner$LoggedInvokeMethod@50] - RUNNING TEST METHOD testRollingUpgrade 2017-01-24 22:12:22,653 [myid:] - INFO 
[Thread-12:PortAssignment@32] - assigning port 11227 2017-01-24 22:12:22,653 [myid:] - INFO [Thread-12:PortAssignment@32] - assigning port 11228 2017-01-24 22:12:22,653 [myid:] - INFO [Thread-12:PortAssignment@32] - assigning port 11229 2017-01-24 22:12:22,653 [myid:] - INFO [Thread-12:PortAssignment@32] - assigning port 11230 2017-01-24 22:12:22,653 [myid:] - INFO [Thread-12:PortAssignment@32] - assigning port 11231 2017-01-24 22:12:22,653 [myid:] - INFO [Thread-12:PortAssignment@32] - assigning port 11232 2017-01-24 22:12:22,653 [myid:] - INFO [Thread-12:PortAssignment@32] - assigning port 11233 2017-01-24 22:12:22,654 [myid:] - INFO [Thread-12:PortAssignment@32] - assigning port 11234 2017-01-24 22:12:22,654 [myid:] - INFO [Thread-12:PortAssignment@32] - assigning port 11235 2017-01-24 22:12:22,654 [myid:] - INFO [Thread-12:QuorumPeerTestBase$MainThread@81] - id = 0 tmpDir = /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test1594648493062198412.junit.dir clientPort = 11227 2017-01-24 22:12:22,655 [myid:] - INFO [Thread-13:QuorumPeerConfig@111] - Reading configuration from: /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test1594648493062198412.junit.dir/zoo.cfg 2017-01-24 22:12:22,655 [myid:] - INFO [Thread-12:QuorumPeerTestBase$MainThread@81] - id = 1 tmpDir = /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test8525006967275020093.junit.dir clientPort = 11230 2017-01-24 22:12:22,655 [myid:] - INFO [Thread-13:QuorumPeerConfig@374] - Defaulting to majority quorums 2017-01-24 22:12:22,656 [myid:0] - INFO [Thread-13:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3 2017-01-24 22:12:22,656 [myid:0] - INFO [Thread-13:DatadirCleanupManager@79] - autopurge.purgeInterval set to 0 2017-01-24 22:12:22,656 [myid:0] - INFO [Thread-13:DatadirCleanupManager@101] - Purge task is not scheduled. 
2017-01-24 22:12:22,656 [myid:] - INFO [Thread-12:QuorumPeerTestBase$MainThread@81] - id = 2 tmpDir = /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2781541844609161589.junit.dir clientPort = 11233 2017-01-24 22:12:22,656 [myid:0] - WARN [Thread-13:QuorumPeerMain@129] - Unable to register log4j JMX control javax.management.InstanceAlreadyExistsException: log4j:hiearchy=default at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324) at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) at org.apache.zookeeper.jmx.ManagedUtil.registerLog4jMBeans(ManagedUtil.java:53) at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:127) at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:116) at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.run(QuorumPeerTestBase.java:140) at java.lang.Thread.run(Thread.java:745) 2017-01-24 22:12:22,657 [myid:0] - INFO [Thread-13:QuorumPeerMain@132] - Starting quorum peer 2017-01-24 22:12:22,657 [myid:] - INFO [Thread-15:QuorumPeerConfig@111] - Reading configuration from: /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2781541844609161589.junit.dir/zoo.cfg 2017-01-24 22:12:22,657 [myid:0] - INFO [Thread-13:NIOServerCnxnFactory@94] - binding to port 0.0.0.0/0.0.0.0:11227 2017-01-24 22:12:22,657 [myid:] - INFO [Thread-15:QuorumPeerConfig@374] - Defaulting to majority quorums 2017-01-24 22:12:22,658 [myid:0] - INFO 
[Thread-13:QuorumPeer@1048] - minSessionTimeout set to -1 2017-01-24 22:12:22,658 [myid:0] - INFO [Thread-13:QuorumPeer@1059] - maxSessionTimeout set to -1 2017-01-24 22:12:22,656 [myid:] - INFO [Thread-14:QuorumPeerConfig@111] - Reading configuration from: /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test8525006967275020093.junit.dir/zoo.cfg 2017-01-24 22:12:22,658 [myid:0] - INFO [Thread-13:QuorumPeer@1277] - QuorumPeer communication is not secured! 2017-01-24 22:12:22,658 [myid:0] - INFO [Thread-13:QuorumPeer@1306] - quorum.cnxn.threads.size set to 20 2017-01-24 22:12:22,658 [myid:] - INFO [Thread-14:QuorumPeerConfig@374] - Defaulting to majority quorums 2017-01-24 22:12:22,659 [myid:0] - INFO [Thread-13:QuorumPeer@540] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2017-01-24 22:12:22,659 [myid:1] - INFO [Thread-14:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3 2017-01-24 22:12:22,659 [myid:1] - INFO [Thread-14:DatadirCleanupManager@79] - autopurge.purgeInterval set to 0 2017-01-24 22:12:22,659 [myid:1] - INFO [Thread-14:DatadirCleanupManager@101] - Purge task is not scheduled. 
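
[Editor's note] The three peers above each read a generated zoo.cfg and default to majority quorums. Piecing together the ports assigned earlier in this run (client ports 11227/11230/11233, election ports 11229/11232/11235) and the tickTime=4000 reported at server creation, the per-peer configuration is approximately the sketch below; dataDir, initLimit, and syncLimit are assumptions, not taken from the log:

```properties
tickTime=4000
initLimit=10
syncLimit=5
dataDir=/path/to/test/data
clientPort=11227
autopurge.snapRetainCount=3
autopurge.purgeInterval=0
server.0=localhost:11228:11229
server.1=localhost:11231:11232
server.2=localhost:11234:11235
```

With three voting members, a majority quorum is two, which is why the ensemble elects a leader as soon as the second peer's notifications are exchanged below.
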
2017-01-24 22:12:22,659 [myid:1] - WARN [Thread-14:QuorumPeerMain@129] - Unable to register log4j JMX control javax.management.InstanceAlreadyExistsException: log4j:hiearchy=default at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324) at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) at org.apache.zookeeper.jmx.ManagedUtil.registerLog4jMBeans(ManagedUtil.java:53) at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:127) at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:116) at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.run(QuorumPeerTestBase.java:140) at java.lang.Thread.run(Thread.java:745) 2017-01-24 22:12:22,660 [myid:1] - INFO [Thread-14:QuorumPeerMain@132] - Starting quorum peer 2017-01-24 22:12:22,660 [myid:1] - INFO [Thread-14:NIOServerCnxnFactory@94] - binding to port 0.0.0.0/0.0.0.0:11230 2017-01-24 22:12:22,660 [myid:1] - INFO [Thread-14:QuorumPeer@1048] - minSessionTimeout set to -1 2017-01-24 22:12:22,660 [myid:1] - INFO [Thread-14:QuorumPeer@1059] - maxSessionTimeout set to -1 2017-01-24 22:12:22,660 [myid:1] - INFO [Thread-14:QuorumPeer@1277] - QuorumPeer communication is not secured! 2017-01-24 22:12:22,661 [myid:1] - INFO [Thread-14:QuorumPeer@1306] - quorum.cnxn.threads.size set to 20 2017-01-24 22:12:22,661 [myid:1] - INFO [Thread-14:QuorumPeer@540] - currentEpoch not found! Creating with a reasonable default of 0. 
This should only happen when you are upgrading your installation 2017-01-24 22:12:22,658 [myid:2] - INFO [Thread-15:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3 2017-01-24 22:12:22,661 [myid:2] - INFO [Thread-15:DatadirCleanupManager@79] - autopurge.purgeInterval set to 0 2017-01-24 22:12:22,661 [myid:2] - INFO [Thread-15:DatadirCleanupManager@101] - Purge task is not scheduled. 2017-01-24 22:12:22,657 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11227 2017-01-24 22:12:22,662 [myid:2] - WARN [Thread-15:QuorumPeerMain@129] - Unable to register log4j JMX control javax.management.InstanceAlreadyExistsException: log4j:hiearchy=default at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324) at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) at org.apache.zookeeper.jmx.ManagedUtil.registerLog4jMBeans(ManagedUtil.java:53) at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:127) at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:116) at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.run(QuorumPeerTestBase.java:140) at java.lang.Thread.run(Thread.java:745) 2017-01-24 22:12:22,662 [myid:2] - INFO [Thread-15:QuorumPeerMain@132] - Starting quorum peer 2017-01-24 22:12:22,660 [myid:0] - INFO [Thread-13:QuorumPeer@555] - acceptedEpoch not found! Creating with a reasonable default of 0. 
This should only happen when you are upgrading your installation 2017-01-24 22:12:22,663 [myid:2] - INFO [Thread-15:NIOServerCnxnFactory@94] - binding to port 0.0.0.0/0.0.0.0:11233 2017-01-24 22:12:22,663 [myid:2] - INFO [Thread-15:QuorumPeer@1048] - minSessionTimeout set to -1 2017-01-24 22:12:22,663 [myid:2] - INFO [Thread-15:QuorumPeer@1059] - maxSessionTimeout set to -1 2017-01-24 22:12:22,663 [myid:2] - INFO [Thread-15:QuorumPeer@1277] - QuorumPeer communication is not secured! 2017-01-24 22:12:22,663 [myid:2] - INFO [Thread-15:QuorumPeer@1306] - quorum.cnxn.threads.size set to 20 2017-01-24 22:12:22,664 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11227:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:42807 2017-01-24 22:12:22,664 [myid:2] - INFO [Thread-15:QuorumPeer@540] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2017-01-24 22:12:22,664 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11227:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:42807 2017-01-24 22:12:22,665 [myid:0] - INFO [Thread-16:QuorumCnxManager$Listener@691] - My election bind port: 0.0.0.0/0.0.0.0:11229 2017-01-24 22:12:22,665 [myid:1] - INFO [Thread-14:QuorumPeer@555] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2017-01-24 22:12:22,666 [myid:0] - INFO [Thread-17:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:42807 (no session established for client) 2017-01-24 22:12:22,666 [myid:2] - INFO [Thread-15:QuorumPeer@555] - acceptedEpoch not found! Creating with a reasonable default of 0. 
This should only happen when you are upgrading your installation 2017-01-24 22:12:22,667 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:QuorumPeer@781] - LOOKING 2017-01-24 22:12:22,667 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:FastLeaderElection@744] - New election. My id = 0, proposed zxid=0x0 2017-01-24 22:12:22,668 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 0 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state) 2017-01-24 22:12:22,668 [myid:0] - WARN [WorkerSender[myid=0]:QuorumCnxManager@559] - Cannot open channel to 1 at election address localhost/127.0.0.1:11232 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:538) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:514) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:393) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:365) at java.lang.Thread.run(Thread.java:745) 2017-01-24 22:12:22,669 [myid:0] - WARN [WorkerSender[myid=0]:QuorumCnxManager@559] - Cannot open channel to 2 at election address localhost/127.0.0.1:11235 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:538) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:514) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:393) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:365) at java.lang.Thread.run(Thread.java:745) 2017-01-24 22:12:22,670 [myid:1] - INFO [Thread-18:QuorumCnxManager$Listener@691] - My election bind port: 0.0.0.0/0.0.0.0:11232 2017-01-24 22:12:22,670 [myid:2] - INFO [Thread-19:QuorumCnxManager$Listener@691] - My election bind port: 0.0.0.0/0.0.0.0:11235 2017-01-24 22:12:22,672 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:QuorumPeer@781] - LOOKING 2017-01-24 22:12:22,672 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:QuorumPeer@781] - LOOKING 2017-01-24 22:12:22,672 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:FastLeaderElection@744] - New election. My id = 1, proposed zxid=0x0 2017-01-24 22:12:22,673 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:FastLeaderElection@744] - New election. 
My id = 2, proposed zxid=0x0 2017-01-24 22:12:22,674 [myid:0] - INFO [localhost/127.0.0.1:11229:QuorumCnxManager$Listener@698] - Received connection request /127.0.0.1:51824 2017-01-24 22:12:22,680 [myid:1] - INFO [localhost/127.0.0.1:11232:QuorumCnxManager$Listener@698] - Received connection request /127.0.0.1:45173 2017-01-24 22:12:22,681 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state) 2017-01-24 22:12:22,682 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state) 2017-01-24 22:12:22,683 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state) 2017-01-24 22:12:22,683 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 0 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state) 2017-01-24 22:12:22,683 [myid:0] - INFO [localhost/127.0.0.1:11229:QuorumCnxManager$Listener@698] - Received connection request /127.0.0.1:51825 2017-01-24 22:12:22,683 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state) 2017-01-24 22:12:22,684 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state) 2017-01-24 22:12:22,684 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@542] - Notification: 0 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state) 2017-01-24 22:12:22,685 
[myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:22,685 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:22,685 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:22,686 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:22,686 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:22,686 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:22,686 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:22,687 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:22,887 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:QuorumPeer@861] - LEADING
2017-01-24 22:12:22,887 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:QuorumPeer@849] - FOLLOWING
2017-01-24 22:12:22,887 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:ZooKeeperServer@162] - Created server with tickTime 4000 minSessionTimeout 8000 maxSessionTimeout 80000 datadir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test1594648493062198412.junit.dir/data/version-2 snapdir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test1594648493062198412.junit.dir/data/version-2
2017-01-24 22:12:22,887 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:ZooKeeperServer@162] - Created server with tickTime 4000 minSessionTimeout 8000 maxSessionTimeout 80000 datadir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2781541844609161589.junit.dir/data/version-2 snapdir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2781541844609161589.junit.dir/data/version-2
2017-01-24 22:12:22,887 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:Follower@64] - FOLLOWING - LEADER ELECTION TOOK - 220
2017-01-24 22:12:22,887 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:Leader@356] - LEADING - LEADER ELECTION TOOK - 214
2017-01-24 22:12:22,888 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:FileTxnSnapLog@281] - Snapshotting: 0x0 to /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2781541844609161589.junit.dir/data/version-2/snapshot.0
2017-01-24 22:12:22,887 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:QuorumPeer@849] - FOLLOWING
2017-01-24 22:12:22,889 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:ZooKeeperServer@162] - Created server with tickTime 4000 minSessionTimeout 8000 maxSessionTimeout 80000 datadir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test8525006967275020093.junit.dir/data/version-2 snapdir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test8525006967275020093.junit.dir/data/version-2
2017-01-24 22:12:22,889 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:Follower@64] - FOLLOWING - LEADER ELECTION TOOK - 217
2017-01-24 22:12:22,890 [myid:2] - INFO [LearnerHandler-/127.0.0.1:42553:LearnerHandler@287] - Follower sid: 0 : info : org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@559c4c8e
2017-01-24 22:12:22,890 [myid:2] - INFO [LearnerHandler-/127.0.0.1:42554:LearnerHandler@287] - Follower sid: 1 : info : org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@4f1e5ed9
2017-01-24 22:12:22,892 [myid:2] - INFO [LearnerHandler-/127.0.0.1:42553:LearnerHandler@342] - Synchronizing with Follower sid: 0 maxCommittedLog=0x0 minCommittedLog=0x0 peerLastZxid=0x0
2017-01-24 22:12:22,892 [myid:2] - INFO [LearnerHandler-/127.0.0.1:42554:LearnerHandler@342] - Synchronizing with Follower sid: 1 maxCommittedLog=0x0 minCommittedLog=0x0 peerLastZxid=0x0
2017-01-24 22:12:22,892 [myid:2] - INFO [LearnerHandler-/127.0.0.1:42553:LearnerHandler@441] - Sending snapshot last zxid of peer is 0x0 zxid of leader is 0x100000000sent zxid of db as 0x0
2017-01-24 22:12:22,892 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:Learner@329] - Getting a snapshot from leader
2017-01-24 22:12:22,892 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:Learner@329] - Getting a snapshot from leader
2017-01-24 22:12:22,893 [myid:2] - INFO [LearnerHandler-/127.0.0.1:42554:LearnerHandler@441] - Sending snapshot last zxid of peer is 0x0 zxid of leader is 0x100000000sent zxid of db as 0x0
2017-01-24 22:12:22,894 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:FileTxnSnapLog@281] - Snapshotting: 0x0 to /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test1594648493062198412.junit.dir/data/version-2/snapshot.0
2017-01-24 22:12:22,894 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:FileTxnSnapLog@281] - Snapshotting: 0x0 to /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test8525006967275020093.junit.dir/data/version-2/snapshot.0
2017-01-24 22:12:22,895 [myid:2] - INFO [LearnerHandler-/127.0.0.1:42553:LearnerHandler@477] - Received NEWLEADER-ACK message from 0
2017-01-24 22:12:22,895 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:Leader@934] - Have quorum of supporters, sids: [ 0,2 ]; starting up and setting last processed zxid: 0x100000000
2017-01-24 22:12:22,895 [myid:2] - INFO [LearnerHandler-/127.0.0.1:42554:LearnerHandler@477] - Received NEWLEADER-ACK message from 1
2017-01-24 22:12:22,917 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11227
2017-01-24 22:12:22,918 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11227:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:42815
2017-01-24 22:12:22,918 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11227:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:42815
2017-01-24 22:12:22,918 [myid:0] - INFO [Thread-23:NIOServerCnxn$StatCommand@655] - Stat command output
2017-01-24 22:12:22,919 [myid:0] - INFO [Thread-23:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:42815 (no session established for client)
2017-01-24 22:12:22,919 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11230
2017-01-24 22:12:22,920 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:52317
2017-01-24 22:12:22,920 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:52317
2017-01-24 22:12:22,921 [myid:1] - INFO [Thread-24:NIOServerCnxn$StatCommand@655] - Stat command output
2017-01-24 22:12:22,921 [myid:1] - INFO [Thread-24:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:52317 (no session established for client)
2017-01-24 22:12:22,921 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:22,922 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45787
2017-01-24 22:12:22,923 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45787
2017-01-24 22:12:22,923 [myid:2] - INFO [Thread-25:NIOServerCnxn$StatCommand@655] - Stat command output
2017-01-24 22:12:22,924 [myid:2] - INFO [Thread-25:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45787 (no session established for client)
2017-01-24 22:12:22,924 [myid:] - INFO [Thread-12:ZooKeeper@438] - Initiating client connection, connectString=127.0.0.1:11227,127.0.0.1:11230,127.0.0.1:11233 sessionTimeout=30000 watcher=org.apache.zookeeper.test.ClientBase$CountdownWatcher@36c0ae96
2017-01-24 22:12:22,925 [myid:] - WARN [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2017-01-24 22:12:22,925 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11230
2017-01-24 22:12:22,926 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:52319
2017-01-24 22:12:22,926 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:52319, server: localhost/127.0.0.1:11230
2017-01-24 22:12:22,927 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:ZooKeeperServer@839] - Client attempting to establish new session at /127.0.0.1:52319
2017-01-24 22:12:22,930 [myid:1] - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:Follower@119] - Got zxid 0x100000001 expected 0x1
2017-01-24 22:12:22,930 [myid:0] - WARN [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:Follower@119] - Got zxid 0x100000001 expected 0x1
2017-01-24 22:12:22,930 [myid:2] - INFO [SyncThread:2:FileTxnLog@199] - Creating new log file: log.100000001
2017-01-24 22:12:22,931 [myid:1] - INFO [SyncThread:1:FileTxnLog@199] - Creating new log file: log.100000001
2017-01-24 22:12:22,931 [myid:0] - INFO [SyncThread:0:FileTxnLog@199] - Creating new log file: log.100000001
2017-01-24 22:12:22,933 [myid:1] - INFO [CommitProcessor:1:ZooKeeperServer@595] - Established session 0x159d440f0ed0000 with negotiated timeout 30000 for client /127.0.0.1:52319
2017-01-24 22:12:22,933 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@1235] - Session establishment complete on server localhost/127.0.0.1:11230, sessionid = 0x159d440f0ed0000, negotiated timeout = 30000
2017-01-24 22:12:22,937 [myid:] - INFO [Thread-12:QuorumAuthUpgradeTest@229] - Restarting server myid=0
2017-01-24 22:12:22,937 [myid:] - INFO [Thread-12:QuorumBase@314] - Shutting down quorum peer QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227
2017-01-24 22:12:22,937 [myid:] - INFO [Thread-12:Follower@167] - shutdown called
java.lang.Exception: shutdown Follower
	at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:167)
	at org.apache.zookeeper.server.quorum.QuorumPeer.shutdown(QuorumPeer.java:896)
	at org.apache.zookeeper.test.QuorumBase.shutdown(QuorumBase.java:315)
	at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$TestQPMain.shutdown(QuorumPeerTestBase.java:59)
	at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.shutdown(QuorumPeerTestBase.java:152)
	at org.apache.zookeeper.server.quorum.auth.QuorumAuthTestBase.shutdown(QuorumAuthTestBase.java:138)
	at org.apache.zookeeper.server.quorum.auth.QuorumAuthUpgradeTest.restartServer(QuorumAuthUpgradeTest.java:230)
	at org.apache.zookeeper.server.quorum.auth.QuorumAuthUpgradeTest.testRollingUpgrade(QuorumAuthUpgradeTest.java:194)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
	at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:52)
	at org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28)
2017-01-24 22:12:22,938 [myid:] - INFO [Thread-12:FollowerZooKeeperServer@139] - Shutting down
2017-01-24 22:12:22,938 [myid:] - INFO [Thread-12:ZooKeeperServer@419] - shutting down
2017-01-24 22:12:22,938 [myid:] - INFO [Thread-12:FollowerRequestProcessor@105] - Shutting down
2017-01-24 22:12:22,938 [myid:] - INFO [Thread-12:CommitProcessor@181] - Shutting down
2017-01-24 22:12:22,938 [myid:] - INFO [Thread-12:FinalRequestProcessor@415] - shutdown of request processor complete
2017-01-24 22:12:22,938 [myid:0] - INFO [FollowerRequestProcessor:0:FollowerRequestProcessor@95] - FollowerRequestProcessor exited loop!
2017-01-24 22:12:22,938 [myid:0] - INFO [CommitProcessor:0:CommitProcessor@150] - CommitProcessor exited loop!
2017-01-24 22:12:22,939 [myid:] - INFO [Thread-12:SyncRequestProcessor@175] - Shutting down
2017-01-24 22:12:22,939 [myid:0] - INFO [SyncThread:0:SyncRequestProcessor@155] - SyncRequestProcessor exited!
2017-01-24 22:12:22,939 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11227:NIOServerCnxnFactory@224] - NIOServerCnxn factory exited run method
2017-01-24 22:12:22,939 [myid:0] - ERROR [localhost/127.0.0.1:11229:QuorumCnxManager$Listener@715] - Exception while listening
java.net.SocketException: Socket closed
	at java.net.PlainSocketImpl.socketAccept(Native Method)
	at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398)
	at java.net.ServerSocket.implAccept(ServerSocket.java:530)
	at java.net.ServerSocket.accept(ServerSocket.java:498)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager$Listener.run(QuorumCnxManager.java:696)
2017-01-24 22:12:22,939 [myid:1] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@980] - Connection broken for id 0, my id = 1, error = java.io.EOFException
	at java.io.DataInputStream.readInt(DataInputStream.java:392)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:965)
2017-01-24 22:12:22,940 [myid:1] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@983] - Interrupting SendWorker
2017-01-24 22:12:22,940 [myid:2] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@980] - Connection broken for id 0, my id = 2, error = java.io.EOFException
	at java.io.DataInputStream.readInt(DataInputStream.java:392)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:965)
2017-01-24 22:12:22,940 [myid:0] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@980] - Connection broken for id 1, my id = 0, error = java.net.SocketException: Socket closed
	at java.net.SocketInputStream.socketRead0(Native Method)
	at java.net.SocketInputStream.read(SocketInputStream.java:152)
	at java.net.SocketInputStream.read(SocketInputStream.java:122)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
	at java.io.DataInputStream.readInt(DataInputStream.java:387)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:965)
2017-01-24 22:12:22,941 [myid:0] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@983] - Interrupting SendWorker
2017-01-24 22:12:22,940 [myid:0] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@896] - Interrupted while waiting for message on queue
java.lang.InterruptedException
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
	at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1049)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:73)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:884)
2017-01-24 22:12:22,941 [myid:0] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@906] - Send worker leaving thread
2017-01-24 22:12:22,940 [myid:0] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@980] - Connection broken for id 2, my id = 0, error = java.net.SocketException: Socket closed
	at java.net.SocketInputStream.socketRead0(Native Method)
	at java.net.SocketInputStream.read(SocketInputStream.java:152)
	at java.net.SocketInputStream.read(SocketInputStream.java:122)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
	at java.io.DataInputStream.readInt(DataInputStream.java:387)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:965)
2017-01-24 22:12:22,941 [myid:0] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@983] - Interrupting SendWorker
2017-01-24 22:12:22,941 [myid:] - INFO [Thread-12:QuorumBase@318] - Shutting down leader election QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227
2017-01-24 22:12:22,941 [myid:0] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@896] - Interrupted while waiting for message on queue
java.lang.InterruptedException
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
	at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1049)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:73)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:884)
2017-01-24 22:12:22,942 [myid:0] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@906] - Send worker leaving thread
2017-01-24 22:12:22,940 [myid:1] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@896] - Interrupted while waiting for message on queue
java.lang.InterruptedException
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
	at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1049)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:73)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:884)
2017-01-24 22:12:22,942 [myid:1] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@906] - Send worker leaving thread
2017-01-24 22:12:22,940 [myid:2] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@983] - Interrupting SendWorker
2017-01-24 22:12:22,942 [myid:] - INFO [Thread-12:QuorumBase@323] - Waiting for QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227 to exit thread
2017-01-24 22:12:22,943 [myid:2] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@896] - Interrupted while waiting for message on queue
java.lang.InterruptedException
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
	at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1049)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:73)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:884)
2017-01-24 22:12:22,943 [myid:2] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@906] - Send worker leaving thread
2017-01-24 22:12:23,380 [myid:0] - INFO [WorkerSender[myid=0]:FastLeaderElection$Messenger$WorkerSender@370] - WorkerSender is down
2017-01-24 22:12:23,380 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection$Messenger$WorkerReceiver@340] - WorkerReceiver is down
2017-01-24 22:12:23,940 [myid:0] - INFO [localhost/127.0.0.1:11229:QuorumCnxManager$Listener@728] - Leaving listener
2017-01-24 22:12:24,897 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:Follower@167] - shutdown called
java.lang.Exception: shutdown Follower
	at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:167)
	at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:855)
2017-01-24 22:12:24,897 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:FollowerZooKeeperServer@139] - Shutting down
2017-01-24 22:12:24,897 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:ZooKeeperServer@419] - shutting down
2017-01-24 22:12:24,898 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:FollowerRequestProcessor@105] - Shutting down
2017-01-24 22:12:24,898 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:CommitProcessor@181] - Shutting down
2017-01-24 22:12:24,898 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:FinalRequestProcessor@415] - shutdown of request processor complete
2017-01-24 22:12:24,898 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:SyncRequestProcessor@175] - Shutting down
2017-01-24 22:12:24,898 [myid:0] - WARN [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:QuorumPeer@879] - QuorumPeer main thread exited
2017-01-24 22:12:24,901 [myid:] - INFO [Thread-12:QuorumPeerTestBase$MainThread@81] - id = 0 tmpDir = /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test6261823493417515862.junit.dir clientPort = 11227
2017-01-24 22:12:24,901 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11227
2017-01-24 22:12:24,901 [myid:] - INFO [Thread-26:QuorumPeerConfig@111] - Reading configuration from: /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test6261823493417515862.junit.dir/zoo.cfg
2017-01-24 22:12:24,902 [myid:] - INFO [Thread-12:ClientBase@246] - server 127.0.0.1:11227 not up java.net.ConnectException: Connection refused
2017-01-24 22:12:24,902 [myid:] - INFO [Thread-26:QuorumPeerConfig@374] - Defaulting to majority quorums
2017-01-24 22:12:24,902 [myid:0] - INFO [Thread-26:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3
2017-01-24 22:12:24,902 [myid:0] - INFO [Thread-26:DatadirCleanupManager@79] - autopurge.purgeInterval set to 0
2017-01-24 22:12:24,902 [myid:0] - INFO [Thread-26:DatadirCleanupManager@101] - Purge task is not scheduled.
2017-01-24 22:12:24,903 [myid:0] - WARN [Thread-26:QuorumPeerMain@129] - Unable to register log4j JMX control
javax.management.InstanceAlreadyExistsException: log4j:hiearchy=default
	at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
	at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
	at org.apache.zookeeper.jmx.ManagedUtil.registerLog4jMBeans(ManagedUtil.java:53)
	at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:127)
	at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:116)
	at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.run(QuorumPeerTestBase.java:140)
	at java.lang.Thread.run(Thread.java:745)
2017-01-24 22:12:24,903 [myid:0] - INFO [Thread-26:QuorumPeerMain@132] - Starting quorum peer
2017-01-24 22:12:24,903 [myid:0] - INFO [Thread-26:NIOServerCnxnFactory@94] - binding to port 0.0.0.0/0.0.0.0:11227
2017-01-24 22:12:24,904 [myid:0] - INFO [Thread-26:QuorumPeer@1048] - minSessionTimeout set to -1
2017-01-24 22:12:24,904 [myid:0] - INFO [Thread-26:QuorumPeer@1059] - maxSessionTimeout set to -1
2017-01-24 22:12:24,904 [myid:0] - INFO [Thread-26:QuorumPeer@1279] - quorum.auth.enableSasl set to true
2017-01-24 22:12:24,904 [myid:0] - INFO [Thread-26:QuorumPeer@1264] - quorum.auth.serverRequireSasl set to false
2017-01-24 22:12:24,904 [myid:0] - INFO [Thread-26:QuorumPeer@1270] - quorum.auth.learnerRequireSasl set to false
2017-01-24 22:12:24,904 [myid:0] - INFO [Thread-26:QuorumPeer@1286] - quorum.auth.kerberos.servicePrincipal set to zkquorum/localhost
2017-01-24 22:12:24,904 [myid:0] - INFO [Thread-26:QuorumPeer@1298] - quorum.auth.server.saslLoginContext set to QuorumServer
2017-01-24 22:12:24,904 [myid:0] - INFO [Thread-26:QuorumPeer@1292] - quorum.auth.learner.saslLoginContext set to QuorumLearner
2017-01-24 22:12:24,904 [myid:0] - INFO [Thread-26:QuorumPeer@1306] - quorum.cnxn.threads.size set to 20
2017-01-24 22:12:24,905 [myid:0] - INFO [Thread-26:Login@294] - QuorumServer successfully logged in.
2017-01-24 22:12:24,905 [myid:0] - INFO [Thread-26:Login@294] - QuorumLearner successfully logged in.
2017-01-24 22:12:24,905 [myid:0] - INFO [Thread-26:QuorumPeer@540] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2017-01-24 22:12:24,906 [myid:0] - INFO [Thread-26:QuorumPeer@555] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2017-01-24 22:12:24,911 [myid:0] - INFO [Thread-27:QuorumCnxManager$Listener@691] - My election bind port: 0.0.0.0/0.0.0.0:11229
2017-01-24 22:12:24,916 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:QuorumPeer@781] - LOOKING
2017-01-24 22:12:24,916 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:FastLeaderElection@744] - New election. My id = 0, proposed zxid=0x0
2017-01-24 22:12:24,917 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 0 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:24,917 [myid:1] - INFO [localhost/127.0.0.1:11232:QuorumCnxManager$Listener@698] - Received connection request /127.0.0.1:45181
2017-01-24 22:12:24,919 [myid:2] - INFO [localhost/127.0.0.1:11235:QuorumCnxManager$Listener@698] - Received connection request /127.0.0.1:50340
2017-01-24 22:12:24,919 [myid:0] - INFO [localhost/127.0.0.1:11229:QuorumCnxManager$Listener@698] - Received connection request /127.0.0.1:51836
2017-01-24 22:12:24,920 [myid:0] - INFO [QuorumConnectionThread-[myid=0]-1:SaslQuorumAuthLearner@79] - Skipping SASL authentication as quorum.auth.learnerRequireSasl=false
2017-01-24 22:12:24,921 [myid:0] - INFO [QuorumConnectionThread-[myid=0]-2:SaslQuorumAuthLearner@79] - Skipping SASL authentication as quorum.auth.learnerRequireSasl=false
2017-01-24 22:12:24,921 [myid:0] - INFO [QuorumConnectionThread-[myid=0]-2:QuorumCnxManager@331] - Have smaller server identifier, so dropping the connection: (2, 0)
2017-01-24 22:12:24,922 [myid:0] - INFO [QuorumConnectionThread-[myid=0]-1:QuorumCnxManager@331] - Have smaller server identifier, so dropping the connection: (1, 0)
2017-01-24 22:12:24,926 [myid:0] - INFO [localhost/127.0.0.1:11229:QuorumCnxManager$Listener@698] - Received connection request /127.0.0.1:51837
2017-01-24 22:12:24,933 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 0 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), FOLLOWING (my state)
2017-01-24 22:12:24,933 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:24,934 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@542] - Notification: 0 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LEADING (my state)
2017-01-24 22:12:24,934 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:24,934 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:24,934 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LEADING (my state)
2017-01-24 22:12:24,935 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), FOLLOWING (my state)
2017-01-24 22:12:24,937 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), FOLLOWING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:24,937 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), FOLLOWING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:24,937 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LEADING (n.state), 2 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:24,937 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LEADING (n.state), 2 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:25,138 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:QuorumPeer@849] - FOLLOWING
2017-01-24 22:12:25,138 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:ZooKeeperServer@162] - Created server with tickTime 4000 minSessionTimeout 8000 maxSessionTimeout 80000 datadir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test6261823493417515862.junit.dir/data/version-2 snapdir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test6261823493417515862.junit.dir/data/version-2
2017-01-24 22:12:25,138 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:Follower@64] - FOLLOWING - LEADER ELECTION TOOK - 222
2017-01-24 22:12:25,139 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:SaslQuorumAuthLearner@79] - Skipping SASL authentication as quorum.auth.learnerRequireSasl=false
2017-01-24 22:12:25,139 [myid:2] - INFO [LearnerHandler-/127.0.0.1:42564:LearnerHandler@287] - Follower sid: 0 : info : org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@559c4c8e
2017-01-24 22:12:25,141 [myid:2] - INFO [LearnerHandler-/127.0.0.1:42564:LearnerHandler@342] - Synchronizing with Follower sid: 0 maxCommittedLog=0x100000002 minCommittedLog=0x100000001 peerLastZxid=0x0
2017-01-24 22:12:25,141 [myid:2] - WARN [LearnerHandler-/127.0.0.1:42564:LearnerHandler@405] - Unhandled proposal scenario
2017-01-24 22:12:25,141 [myid:2] - INFO [LearnerHandler-/127.0.0.1:42564:LearnerHandler@441] - Sending snapshot last zxid of peer is 0x0 zxid of leader is 0x100000002sent zxid of db as 0x100000002
2017-01-24 22:12:25,141 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:Learner@329] - Getting a snapshot from leader
2017-01-24 22:12:25,143 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:FileTxnSnapLog@281] - Snapshotting: 0x100000002 to /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test6261823493417515862.junit.dir/data/version-2/snapshot.100000002
2017-01-24 22:12:25,145 [myid:2] - INFO [LearnerHandler-/127.0.0.1:42564:LearnerHandler@477] - Received NEWLEADER-ACK message from 0
2017-01-24 22:12:25,152 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11227
2017-01-24 22:12:25,153 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11227:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:42825
2017-01-24 22:12:25,153 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11227:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:42825
2017-01-24 22:12:25,153 [myid:0] - INFO [Thread-29:NIOServerCnxn$StatCommand@655] - Stat command output
2017-01-24 22:12:25,156 [myid:0] - INFO [Thread-29:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:42825 (no session established for client)
2017-01-24 22:12:25,159 [myid:0] - WARN [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:Follower@119] - Got zxid 0x100000003 expected 0x1
2017-01-24 22:12:25,159 [myid:0] - INFO [SyncThread:0:FileTxnLog@199] - Creating new log file: log.100000003
2017-01-24 22:12:25,160 [myid:] - INFO [Thread-12:QuorumAuthUpgradeTest@229] - Restarting server myid=1
2017-01-24 22:12:25,161 [myid:] - INFO [Thread-12:QuorumBase@314] - Shutting down quorum peer QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233
2017-01-24 22:12:25,161 [myid:] - INFO [Thread-12:Leader@491] - Shutting down
2017-01-24 22:12:25,161 [myid:] - INFO [Thread-12:Leader@497] - Shutdown called
java.lang.Exception: shutdown Leader! reason: quorum Peer shutdown
	at org.apache.zookeeper.server.quorum.Leader.shutdown(Leader.java:497)
	at org.apache.zookeeper.server.quorum.QuorumPeer.shutdown(QuorumPeer.java:893)
	at org.apache.zookeeper.test.QuorumBase.shutdown(QuorumBase.java:315)
	at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$TestQPMain.shutdown(QuorumPeerTestBase.java:59)
	at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.shutdown(QuorumPeerTestBase.java:152)
	at org.apache.zookeeper.server.quorum.auth.QuorumAuthTestBase.shutdown(QuorumAuthTestBase.java:138)
	at org.apache.zookeeper.server.quorum.auth.QuorumAuthUpgradeTest.restartServer(QuorumAuthUpgradeTest.java:230)
	at org.apache.zookeeper.server.quorum.auth.QuorumAuthUpgradeTest.testRollingUpgrade(QuorumAuthUpgradeTest.java:195)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
	at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:52)
	at org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28)
2017-01-24 22:12:25,161 [myid:] - INFO [Thread-12:ZooKeeperServer@419] - shutting down
2017-01-24 22:12:25,161 [myid:2] - INFO [Thread-20:Leader$LearnerCnxAcceptor@318] - exception while shutting down acceptor: java.net.SocketException: Socket closed
2017-01-24 22:12:25,162 [myid:] - INFO [Thread-12:SessionTrackerImpl@225] - Shutting down
2017-01-24 22:12:25,162 [myid:] - INFO [Thread-12:PrepRequestProcessor@761] - Shutting down
2017-01-24 22:12:25,163 [myid:] - INFO [Thread-12:ProposalRequestProcessor@88] - Shutting down
2017-01-24 22:12:25,163 [myid:] - INFO [Thread-12:CommitProcessor@181] - Shutting down
2017-01-24 22:12:25,163 [myid:2] - INFO [ProcessThread(sid:2 cport:-1)::PrepRequestProcessor@143] - PrepRequestProcessor exited loop!
2017-01-24 22:12:25,163 [myid:] - INFO [Thread-12:Leader$ToBeAppliedRequestProcessor@656] - Shutting down
2017-01-24 22:12:25,163 [myid:] - INFO [Thread-12:FinalRequestProcessor@415] - shutdown of request processor complete
2017-01-24 22:12:25,163 [myid:2] - INFO [CommitProcessor:2:CommitProcessor@150] - CommitProcessor exited loop!
2017-01-24 22:12:25,163 [myid:] - INFO [Thread-12:SyncRequestProcessor@175] - Shutting down
2017-01-24 22:12:25,164 [myid:2] - INFO [SyncThread:2:SyncRequestProcessor@155] - SyncRequestProcessor exited!
2017-01-24 22:12:25,164 [myid:0] - WARN [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:Follower@90] - Exception when following the leader
java.io.EOFException
	at java.io.DataInputStream.readInt(DataInputStream.java:392)
	at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63)
	at org.apache.zookeeper.server.quorum.QuorumPacket.deserialize(QuorumPacket.java:83)
	at org.apache.jute.BinaryInputArchive.readRecord(BinaryInputArchive.java:99)
	at org.apache.zookeeper.server.quorum.Learner.readPacket(Learner.java:152)
	at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:86)
	at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:851)
2017-01-24 22:12:25,164 [myid:1] - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:Follower@90] - Exception when following the leader
java.io.EOFException
	at java.io.DataInputStream.readInt(DataInputStream.java:392)
	at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63)
	at org.apache.zookeeper.server.quorum.QuorumPacket.deserialize(QuorumPacket.java:83)
	at org.apache.jute.BinaryInputArchive.readRecord(BinaryInputArchive.java:99)
	at org.apache.zookeeper.server.quorum.Learner.readPacket(Learner.java:152)
	at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:86)
	at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:851)
2017-01-24 22:12:25,165 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:Follower@167] - shutdown called
java.lang.Exception: shutdown Follower
	at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:167)
	at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:855)
2017-01-24 22:12:25,165 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:52319 which had sessionid 0x159d440f0ed0000
2017-01-24 22:12:25,165 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:FollowerZooKeeperServer@139] - Shutting down
2017-01-24 22:12:25,166 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:ZooKeeperServer@419] - shutting down
2017-01-24 22:12:25,166 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:FollowerRequestProcessor@105] - Shutting down
2017-01-24 22:12:25,166 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:CommitProcessor@181] - Shutting down
2017-01-24 22:12:25,166 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:FinalRequestProcessor@415] - shutdown of request processor complete
2017-01-24 22:12:25,166 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:SyncRequestProcessor@175] - Shutting down
2017-01-24 22:12:25,166 [myid:1] - INFO [FollowerRequestProcessor:1:FollowerRequestProcessor@95] - FollowerRequestProcessor exited loop!
2017-01-24 22:12:25,164 [myid:2] - WARN [LearnerHandler-/127.0.0.1:42564:LearnerHandler@598] - ******* GOODBYE /127.0.0.1:42564 ********
2017-01-24 22:12:25,166 [myid:1] - INFO [SyncThread:1:SyncRequestProcessor@155] - SyncRequestProcessor exited!
2017-01-24 22:12:25,166 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@224] - NIOServerCnxn factory exited run method
2017-01-24 22:12:25,166 [myid:2] - WARN [LearnerHandler-/127.0.0.1:42553:LearnerHandler@598] - ******* GOODBYE /127.0.0.1:42553 ********
2017-01-24 22:12:25,166 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect
2017-01-24 22:12:25,167 [myid:2] - WARN [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:QuorumPeer@867] - Unexpected exception
java.lang.InterruptedException: sleep interrupted
    at java.lang.Thread.sleep(Native Method)
    at org.apache.zookeeper.server.quorum.Leader.lead(Leader.java:451)
    at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:864)
2017-01-24 22:12:25,167 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:Leader@491] - Shutting down
2017-01-24 22:12:25,168 [myid:2] - WARN [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:QuorumPeer@879] - QuorumPeer main thread exited
2017-01-24 22:12:25,165 [myid:2] - WARN [LearnerHandler-/127.0.0.1:42554:LearnerHandler@598] - ******* GOODBYE /127.0.0.1:42554 ********
2017-01-24 22:12:25,165 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:Follower@167] - shutdown called
java.lang.Exception: shutdown Follower
    at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:167)
    at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:855)
2017-01-24 22:12:25,168 [myid:2] - WARN [LearnerHandler-/127.0.0.1:42554:LearnerHandler@610] - Ignoring unexpected exception
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1219)
    at java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:340)
    at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:338)
    at org.apache.zookeeper.server.quorum.LearnerHandler.shutdown(LearnerHandler.java:608)
    at org.apache.zookeeper.server.quorum.LearnerHandler.run(LearnerHandler.java:601)
2017-01-24 22:12:25,167 [myid:2] - WARN [LearnerHandler-/127.0.0.1:42553:LearnerHandler@610] - Ignoring unexpected exception
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1219)
    at java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:340)
    at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:338)
    at org.apache.zookeeper.server.quorum.LearnerHandler.shutdown(LearnerHandler.java:608)
    at org.apache.zookeeper.server.quorum.LearnerHandler.run(LearnerHandler.java:601)
2017-01-24 22:12:25,167 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:QuorumPeer@781] - LOOKING
2017-01-24 22:12:25,171 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:FileSnap@83] - Reading snapshot /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test8525006967275020093.junit.dir/data/version-2/snapshot.0
2017-01-24 22:12:25,173 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:FastLeaderElection@744] - New election. My id = 1, proposed zxid=0x100000003
2017-01-24 22:12:25,174 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x100000003 (n.zxid), 0x2 (n.round), LOOKING (n.state), 1 (n.sid), 0x1 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:25,167 [myid:1] - INFO [CommitProcessor:1:CommitProcessor@150] - CommitProcessor exited loop!
2017-01-24 22:12:25,174 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x100000003 (n.zxid), 0x2 (n.round), LOOKING (n.state), 1 (n.sid), 0x1 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:25,167 [myid:2] - WARN [LearnerHandler-/127.0.0.1:42564:LearnerHandler@610] - Ignoring unexpected exception
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1219)
    at java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:340)
    at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:338)
    at org.apache.zookeeper.server.quorum.LearnerHandler.shutdown(LearnerHandler.java:608)
    at org.apache.zookeeper.server.quorum.LearnerHandler.run(LearnerHandler.java:601)
2017-01-24 22:12:25,174 [myid:2] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@983] - Interrupting SendWorker
2017-01-24 22:12:25,175 [myid:2] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@896] - Interrupted while waiting for message on queue
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1049)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:73)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:884)
2017-01-24 22:12:25,176 [myid:2] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@906] - Send worker leaving thread
2017-01-24 22:12:25,174 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x100000003 (n.zxid), 0x2 (n.round), LOOKING (n.state), 1 (n.sid), 0x1 (n.peerEPoch), FOLLOWING (my state)
2017-01-24 22:12:25,169 [myid:2] - ERROR [localhost/127.0.0.1:11235:QuorumCnxManager$Listener@715] - Exception while listening
java.net.SocketException: Socket closed
    at java.net.PlainSocketImpl.socketAccept(Native Method)
    at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398)
    at java.net.ServerSocket.implAccept(ServerSocket.java:530)
    at java.net.ServerSocket.accept(ServerSocket.java:498)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$Listener.run(QuorumCnxManager.java:696)
2017-01-24 22:12:25,169 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:FollowerZooKeeperServer@139] - Shutting down
2017-01-24 22:12:25,176 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:ZooKeeperServer@419] - shutting down
2017-01-24 22:12:25,177 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:FollowerRequestProcessor@105] - Shutting down
2017-01-24 22:12:25,177 [myid:2] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@980] - Connection broken for id 0, my id = 2, error =
java.net.SocketException: Socket closed
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.read(SocketInputStream.java:152)
    at java.net.SocketInputStream.read(SocketInputStream.java:122)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
    at java.io.DataInputStream.readInt(DataInputStream.java:387)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:965)
2017-01-24 22:12:25,177 [myid:2] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@983] - Interrupting SendWorker
2017-01-24 22:12:25,175 [myid:0] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@980] - Connection broken for id 2, my id = 0, error =
java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:392)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:965)
2017-01-24 22:12:25,177 [myid:0] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@983] - Interrupting SendWorker
2017-01-24 22:12:25,175 [myid:1] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@980] - Connection broken for id 2, my id = 1, error =
java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:392)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:965)
2017-01-24 22:12:25,178 [myid:1] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@983] - Interrupting SendWorker
2017-01-24 22:12:25,178 [myid:] - INFO [Thread-12:QuorumBase@318] - Shutting down leader election QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233
2017-01-24 22:12:25,178 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), FOLLOWING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:25,178 [myid:2] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@896] - Interrupted while waiting for message on queue
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1049)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:73)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:884)
2017-01-24 22:12:25,179 [myid:2] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@906] - Send worker leaving thread
2017-01-24 22:12:25,179 [myid:] - INFO [Thread-12:QuorumBase@323] - Waiting for QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233 to exit thread
2017-01-24 22:12:25,180 [myid:] - INFO [Thread-12:QuorumPeerTestBase$MainThread@81] - id = 2 tmpDir = /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test4813485841779601602.junit.dir clientPort = 11233
2017-01-24 22:12:25,179 [myid:0] - INFO [FollowerRequestProcessor:0:FollowerRequestProcessor@95] - FollowerRequestProcessor exited loop!
2017-01-24 22:12:25,179 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:CommitProcessor@181] - Shutting down
2017-01-24 22:12:25,178 [myid:0] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@896] - Interrupted while waiting for message on queue
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1049)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:73)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:884)
2017-01-24 22:12:25,183 [myid:0] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@906] - Send worker leaving thread
2017-01-24 22:12:25,178 [myid:1] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@896] - Interrupted while waiting for message on queue
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1049)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:73)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:884)
2017-01-24 22:12:25,183 [myid:1] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@906] - Send worker leaving thread
2017-01-24 22:12:25,183 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:FinalRequestProcessor@415] - shutdown of request processor complete
2017-01-24 22:12:25,183 [myid:0] - INFO [CommitProcessor:0:CommitProcessor@150] - CommitProcessor exited loop!
2017-01-24 22:12:25,182 [myid:] - INFO [Thread-30:QuorumPeerConfig@111] - Reading configuration from: /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test4813485841779601602.junit.dir/zoo.cfg
2017-01-24 22:12:25,182 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:25,183 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:SyncRequestProcessor@175] - Shutting down
2017-01-24 22:12:25,184 [myid:0] - INFO [SyncThread:0:SyncRequestProcessor@155] - SyncRequestProcessor exited!
2017-01-24 22:12:25,185 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:QuorumPeer@781] - LOOKING
2017-01-24 22:12:25,185 [myid:] - INFO [Thread-30:QuorumPeerConfig@374] - Defaulting to majority quorums
2017-01-24 22:12:25,186 [myid:2] - INFO [Thread-30:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3
2017-01-24 22:12:25,186 [myid:2] - INFO [Thread-30:DatadirCleanupManager@79] - autopurge.purgeInterval set to 0
2017-01-24 22:12:25,186 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:FileSnap@83] - Reading snapshot /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test6261823493417515862.junit.dir/data/version-2/snapshot.100000002
2017-01-24 22:12:25,186 [myid:2] - INFO [Thread-30:DatadirCleanupManager@101] - Purge task is not scheduled.
2017-01-24 22:12:25,186 [myid:2] - WARN [Thread-30:QuorumPeerMain@129] - Unable to register log4j JMX control
javax.management.InstanceAlreadyExistsException: log4j:hiearchy=default
    at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
    at org.apache.zookeeper.jmx.ManagedUtil.registerLog4jMBeans(ManagedUtil.java:53)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:127)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:116)
    at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.run(QuorumPeerTestBase.java:140)
    at java.lang.Thread.run(Thread.java:745)
2017-01-24 22:12:25,187 [myid:2] - INFO [Thread-30:QuorumPeerMain@132] - Starting quorum peer
2017-01-24 22:12:25,187 [myid:2] - INFO [Thread-30:NIOServerCnxnFactory@94] - binding to port 0.0.0.0/0.0.0.0:11233
2017-01-24 22:12:25,188 [myid:] - INFO [Thread-12:ClientBase@246] - server 127.0.0.1:11233 not up java.net.ConnectException: Connection refused
2017-01-24 22:12:25,187 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:FastLeaderElection@744] - New election. My id = 0, proposed zxid=0x100000003
2017-01-24 22:12:25,188 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 0 (n.leader), 0x100000003 (n.zxid), 0x2 (n.round), LOOKING (n.state), 0 (n.sid), 0x1 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:25,188 [myid:0] - WARN [WorkerSender[myid=0]:QuorumCnxManager@559] - Cannot open channel to 2 at election address localhost/127.0.0.1:11235
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:538)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:514)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:393)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:365)
    at java.lang.Thread.run(Thread.java:745)
2017-01-24 22:12:25,189 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 0 (n.leader), 0x100000003 (n.zxid), 0x2 (n.round), LOOKING (n.state), 0 (n.sid), 0x1 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:25,189 [myid:2] - INFO [Thread-30:QuorumPeer@1048] - minSessionTimeout set to -1
2017-01-24 22:12:25,189 [myid:2] - INFO [Thread-30:QuorumPeer@1059] - maxSessionTimeout set to -1
2017-01-24 22:12:25,189 [myid:2] - INFO [Thread-30:QuorumPeer@1279] - quorum.auth.enableSasl set to true
2017-01-24 22:12:25,189 [myid:2] - INFO [Thread-30:QuorumPeer@1264] - quorum.auth.serverRequireSasl set to false
2017-01-24 22:12:25,189 [myid:2] - INFO [Thread-30:QuorumPeer@1270] - quorum.auth.learnerRequireSasl set to false
2017-01-24 22:12:25,190 [myid:2] - INFO [Thread-30:QuorumPeer@1286] - quorum.auth.kerberos.servicePrincipal set to zkquorum/localhost
2017-01-24 22:12:25,190 [myid:2] - INFO [Thread-30:QuorumPeer@1298] - quorum.auth.server.saslLoginContext set to QuorumServer
2017-01-24 22:12:25,190 [myid:2] - INFO [Thread-30:QuorumPeer@1292] - quorum.auth.learner.saslLoginContext set to QuorumLearner
2017-01-24 22:12:25,190 [myid:2] - INFO [Thread-30:QuorumPeer@1306] - quorum.cnxn.threads.size set to 20
2017-01-24 22:12:25,190 [myid:2] - INFO [Thread-30:Login@294] - QuorumServer successfully logged in.
2017-01-24 22:12:25,190 [myid:2] - INFO [Thread-30:Login@294] - QuorumLearner successfully logged in.
2017-01-24 22:12:25,191 [myid:2] - INFO [Thread-30:QuorumPeer@540] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2017-01-24 22:12:25,192 [myid:2] - INFO [Thread-30:QuorumPeer@555] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2017-01-24 22:12:25,193 [myid:2] - INFO [Thread-31:QuorumCnxManager$Listener@691] - My election bind port: 0.0.0.0/0.0.0.0:11235
2017-01-24 22:12:25,196 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:QuorumPeer@781] - LOOKING
2017-01-24 22:12:25,196 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:FastLeaderElection@744] - New election. My id = 2, proposed zxid=0x0
2017-01-24 22:12:25,197 [myid:0] - INFO [localhost/127.0.0.1:11229:QuorumCnxManager$Listener@698] - Received connection request /127.0.0.1:51842
2017-01-24 22:12:25,197 [myid:1] - INFO [localhost/127.0.0.1:11232:QuorumCnxManager$Listener@698] - Received connection request /127.0.0.1:45190
2017-01-24 22:12:25,198 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:25,202 [myid:2] - INFO [QuorumConnectionThread-[myid=2]-2:SaslQuorumAuthLearner@79] - Skipping SASL authentication as quorum.auth.learnerRequireSasl=false
2017-01-24 22:12:25,204 [myid:2] - INFO [QuorumConnectionThread-[myid=2]-1:SaslQuorumAuthLearner@79] - Skipping SASL authentication as quorum.auth.learnerRequireSasl=false
2017-01-24 22:12:25,210 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:25,210 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x100000003 (n.zxid), 0x2 (n.round), LOOKING (n.state), 1 (n.sid), 0x1 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:25,210 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x100000003 (n.zxid), 0x2 (n.round), LOOKING (n.state), 1 (n.sid), 0x1 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:25,212 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@542] - Notification: 0 (n.leader), 0x100000003 (n.zxid), 0x2 (n.round), LOOKING (n.state), 0 (n.sid), 0x1 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:25,212 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x100000003 (n.zxid), 0x2 (n.round), LOOKING (n.state), 2 (n.sid), 0x1 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:25,212 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x100000003 (n.zxid), 0x2 (n.round), LOOKING (n.state), 2 (n.sid), 0x1 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:25,213 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:25,213 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x100000003 (n.zxid), 0x2 (n.round), LOOKING (n.state), 2 (n.sid), 0x1 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:25,214 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x100000003 (n.zxid), 0x2 (n.round), LOOKING (n.state), 0 (n.sid), 0x1 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:25,214 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x100000003 (n.zxid), 0x2 (n.round), LOOKING (n.state), 0 (n.sid), 0x1 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:25,214 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@542] - Notification: 0 (n.leader), 0x100000003 (n.zxid), 0x2 (n.round), LOOKING (n.state), 0 (n.sid), 0x1 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:25,214 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x100000003 (n.zxid), 0x2 (n.round), LOOKING (n.state), 0 (n.sid), 0x1 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:25,414 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:QuorumPeer@861] - LEADING
2017-01-24 22:12:25,414 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:ZooKeeperServer@162] - Created server with tickTime 4000 minSessionTimeout 8000 maxSessionTimeout 80000 datadir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test8525006967275020093.junit.dir/data/version-2 snapdir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test8525006967275020093.junit.dir/data/version-2
2017-01-24 22:12:25,415 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:Leader@356] - LEADING - LEADER ELECTION TOOK - 245
2017-01-24 22:12:25,415 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:FileSnap@83] - Reading snapshot /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test8525006967275020093.junit.dir/data/version-2/snapshot.0
2017-01-24 22:12:25,416 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:QuorumPeer@849] - FOLLOWING
2017-01-24 22:12:25,416 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:ZooKeeperServer@162] - Created server with tickTime 4000 minSessionTimeout 8000 maxSessionTimeout 80000 datadir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test6261823493417515862.junit.dir/data/version-2 snapdir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test6261823493417515862.junit.dir/data/version-2
2017-01-24 22:12:25,416 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:Follower@64] - FOLLOWING - LEADER ELECTION TOOK - 231
2017-01-24 22:12:25,417 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:SaslQuorumAuthLearner@79] - Skipping SASL authentication as quorum.auth.learnerRequireSasl=false
2017-01-24 22:12:25,417 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:FileTxnSnapLog@281] - Snapshotting: 0x100000003 to /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test8525006967275020093.junit.dir/data/version-2/snapshot.100000003
2017-01-24 22:12:25,415 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:QuorumPeer@849] - FOLLOWING
2017-01-24 22:12:25,417 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:ZooKeeperServer@162] - Created server with tickTime 4000 minSessionTimeout 8000 maxSessionTimeout 80000 datadir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test4813485841779601602.junit.dir/data/version-2 snapdir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test4813485841779601602.junit.dir/data/version-2
2017-01-24 22:12:25,417 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:Follower@64] - FOLLOWING - LEADER ELECTION TOOK - 221
2017-01-24 22:12:25,418 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:SaslQuorumAuthLearner@79] - Skipping SASL authentication as quorum.auth.learnerRequireSasl=false
2017-01-24 22:12:25,422 [myid:1] - INFO [LearnerHandler-/127.0.0.1:50169:LearnerHandler@287] - Follower sid: 0 : info : org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@752a0b2f
2017-01-24 22:12:25,422 [myid:1] - INFO [LearnerHandler-/127.0.0.1:50170:LearnerHandler@287] - Follower sid: 2 : info : org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@1b4476c1
2017-01-24 22:12:25,424 [myid:1] - INFO [LearnerHandler-/127.0.0.1:50169:LearnerHandler@342] - Synchronizing with Follower sid: 0 maxCommittedLog=0x0 minCommittedLog=0x0 peerLastZxid=0x100000003
2017-01-24 22:12:25,424 [myid:1] - INFO [LearnerHandler-/127.0.0.1:50169:LearnerHandler@441] - Sending snapshot last zxid of peer is 0x100000003 zxid of leader is 0x200000000sent zxid of db as 0x100000003
2017-01-24 22:12:25,424 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:Learner@329] - Getting a snapshot from leader
2017-01-24 22:12:25,425 [myid:1] - INFO [LearnerHandler-/127.0.0.1:50170:LearnerHandler@342] - Synchronizing with Follower sid: 2 maxCommittedLog=0x0 minCommittedLog=0x0 peerLastZxid=0x0
2017-01-24 22:12:25,426 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:Learner@329] - Getting a snapshot from leader
2017-01-24 22:12:25,426 [myid:1] - INFO [LearnerHandler-/127.0.0.1:50170:LearnerHandler@441] - Sending snapshot last zxid of peer is 0x0 zxid of leader is 0x200000000sent zxid of db as 0x100000003
2017-01-24 22:12:25,429 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:FileTxnSnapLog@281] - Snapshotting: 0x100000003 to /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test4813485841779601602.junit.dir/data/version-2/snapshot.100000003
2017-01-24 22:12:25,429 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:FileTxnSnapLog@281] - Snapshotting: 0x100000003 to /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test6261823493417515862.junit.dir/data/version-2/snapshot.100000003
2017-01-24 22:12:25,432 [myid:1] - INFO [LearnerHandler-/127.0.0.1:50170:LearnerHandler@477] - Received NEWLEADER-ACK message from 2
2017-01-24 22:12:25,432 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:Leader@934] - Have quorum of supporters, sids: [ 1,2 ]; starting up and setting last processed zxid: 0x200000000
2017-01-24 22:12:25,433 [myid:1] - INFO [LearnerHandler-/127.0.0.1:50169:LearnerHandler@477] - Received NEWLEADER-ACK message from 0
2017-01-24 22:12:25,446 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:25,446 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45802
2017-01-24 22:12:25,447 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45802
2017-01-24 22:12:25,447 [myid:2] - INFO [Thread-35:NIOServerCnxn$StatCommand@655] - Stat command output
2017-01-24 22:12:25,448 [myid:2] - INFO [Thread-35:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45802 (no session established for client)
2017-01-24 22:12:25,686 [myid:0] - INFO [WorkerSender[myid=0]:FastLeaderElection$Messenger$WorkerSender@370] - WorkerSender is down
2017-01-24 22:12:25,687 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection$Messenger$WorkerReceiver@340] - WorkerReceiver is down
2017-01-24 22:12:26,176 [myid:2] - INFO [localhost/127.0.0.1:11235:QuorumCnxManager$Listener@728] - Leaving listener
2017-01-24 22:12:26,212 [myid:] - WARN [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2017-01-24 22:12:26,212 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11233
2017-01-24 22:12:26,212 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45803
2017-01-24 22:12:26,212 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:45803, server: localhost/127.0.0.1:11233
2017-01-24 22:12:26,213 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:ZooKeeperServer@832] - Client attempting to renew session 0x159d440f0ed0000 at /127.0.0.1:45803
2017-01-24 22:12:26,213 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:Learner@107] - Revalidating client: 0x159d440f0ed0000
2017-01-24 22:12:26,215 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:ZooKeeperServer@595] - Established session 0x159d440f0ed0000 with negotiated timeout 30000 for client /127.0.0.1:45803
2017-01-24 22:12:26,215 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@1235] - Session establishment complete on server localhost/127.0.0.1:11233, sessionid = 0x159d440f0ed0000, negotiated timeout = 30000
2017-01-24 22:12:26,216 [myid:2] - WARN [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:Follower@119] - Got zxid 0x200000001 expected 0x1
2017-01-24 22:12:26,217 [myid:0] - WARN [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:Follower@119] - Got zxid 0x200000001 expected 0x1
2017-01-24 22:12:26,217 [myid:2] - INFO [SyncThread:2:FileTxnLog@199] - Creating new log file: log.200000001
2017-01-24 22:12:26,219 [myid:] - INFO [Thread-12:QuorumAuthUpgradeTest@229] - Restarting server myid=2
2017-01-24 22:12:26,220 [myid:] - INFO [Thread-12:QuorumBase@314] - Shutting down quorum peer QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233
2017-01-24 22:12:26,220 [myid:] - INFO [Thread-12:Follower@167] - shutdown called
java.lang.Exception: shutdown Follower
    at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:167)
    at org.apache.zookeeper.server.quorum.QuorumPeer.shutdown(QuorumPeer.java:896)
    at org.apache.zookeeper.test.QuorumBase.shutdown(QuorumBase.java:315)
    at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$TestQPMain.shutdown(QuorumPeerTestBase.java:59)
    at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.shutdown(QuorumPeerTestBase.java:152)
    at org.apache.zookeeper.server.quorum.auth.QuorumAuthTestBase.shutdown(QuorumAuthTestBase.java:138)
    at org.apache.zookeeper.server.quorum.auth.QuorumAuthUpgradeTest.restartServer(QuorumAuthUpgradeTest.java:230)
    at org.apache.zookeeper.server.quorum.auth.QuorumAuthUpgradeTest.testRollingUpgrade(QuorumAuthUpgradeTest.java:196)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:52) at org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28) 2017-01-24 22:12:26,220 [myid:] - INFO [Thread-12:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45803 which had sessionid 0x159d440f0ed0000 2017-01-24 22:12:26,221 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect 2017-01-24 22:12:26,221 [myid:] - INFO [Thread-12:FollowerZooKeeperServer@139] - Shutting down 2017-01-24 22:12:26,221 [myid:] - INFO [Thread-12:ZooKeeperServer@419] - shutting down 2017-01-24 22:12:26,221 [myid:] - INFO [Thread-12:FollowerRequestProcessor@105] - Shutting down 2017-01-24 22:12:26,221 [myid:] - INFO [Thread-12:CommitProcessor@181] - Shutting down 2017-01-24 22:12:26,221 [myid:2] - INFO [FollowerRequestProcessor:2:FollowerRequestProcessor@95] - FollowerRequestProcessor exited loop! 2017-01-24 22:12:26,221 [myid:2] - INFO [CommitProcessor:2:CommitProcessor@150] - CommitProcessor exited loop! 2017-01-24 22:12:26,221 [myid:] - INFO [Thread-12:FinalRequestProcessor@415] - shutdown of request processor complete 2017-01-24 22:12:26,222 [myid:] - INFO [Thread-12:SyncRequestProcessor@175] - Shutting down 2017-01-24 22:12:26,222 [myid:2] - INFO [SyncThread:2:SyncRequestProcessor@155] - SyncRequestProcessor exited! 
2017-01-24 22:12:26,223 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@224] - NIOServerCnxn factory exited run method
2017-01-24 22:12:26,223 [myid:2] - ERROR [localhost/127.0.0.1:11235:QuorumCnxManager$Listener@715] - Exception while listening
java.net.SocketException: Socket closed
	at java.net.PlainSocketImpl.socketAccept(Native Method)
	at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398)
	at java.net.ServerSocket.implAccept(ServerSocket.java:530)
	at java.net.ServerSocket.accept(ServerSocket.java:498)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager$Listener.run(QuorumCnxManager.java:696)
2017-01-24 22:12:26,223 [myid:0] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@980] - Connection broken for id 2, my id = 0, error =
java.io.EOFException
	at java.io.DataInputStream.readInt(DataInputStream.java:392)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:965)
2017-01-24 22:12:26,224 [myid:0] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@983] - Interrupting SendWorker
2017-01-24 22:12:26,224 [myid:0] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@896] - Interrupted while waiting for message on queue
java.lang.InterruptedException
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
	at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1049)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:73)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:884)
2017-01-24 22:12:26,224 [myid:2] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@980] - Connection broken for id 0, my id = 2, error =
java.net.SocketException: Socket closed
	at java.net.SocketInputStream.socketRead0(Native Method)
	at java.net.SocketInputStream.read(SocketInputStream.java:152)
	at java.net.SocketInputStream.read(SocketInputStream.java:122)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
	at java.io.DataInputStream.readInt(DataInputStream.java:387)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:965)
2017-01-24 22:12:26,225 [myid:2] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@983] - Interrupting SendWorker
2017-01-24 22:12:26,225 [myid:2] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@896] - Interrupted while waiting for message on queue
java.lang.InterruptedException
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
	at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1049)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:73)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:884)
2017-01-24 22:12:26,226 [myid:2] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@906] - Send worker leaving thread
2017-01-24 22:12:26,225 [myid:0] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@906] - Send worker leaving thread
2017-01-24 22:12:26,224 [myid:2] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@896] - Interrupted while waiting for message on queue
java.lang.InterruptedException
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
	at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1049)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:73)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:884)
2017-01-24 22:12:26,226 [myid:2] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@906] - Send worker leaving thread
2017-01-24 22:12:26,225 [myid:1] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@980] - Connection broken for id 2, my id = 1, error =
java.io.EOFException
	at java.io.DataInputStream.readInt(DataInputStream.java:392)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:965)
2017-01-24 22:12:26,227 [myid:1] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@983] - Interrupting SendWorker
2017-01-24 22:12:26,225 [myid:2] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@980] - Connection broken for id 1, my id = 2, error =
java.net.SocketException: Socket closed
	at java.net.SocketInputStream.socketRead0(Native Method)
	at java.net.SocketInputStream.read(SocketInputStream.java:152)
	at java.net.SocketInputStream.read(SocketInputStream.java:122)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
	at java.io.DataInputStream.readInt(DataInputStream.java:387)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:965)
2017-01-24 22:12:26,227 [myid:2] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@983] - Interrupting SendWorker
2017-01-24 22:12:26,227 [myid:1] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@896] - Interrupted while waiting for message on queue
java.lang.InterruptedException
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
	at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1049)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:73)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:884)
2017-01-24 22:12:26,228 [myid:1] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@906] - Send worker leaving thread
2017-01-24 22:12:26,228 [myid:] - INFO [Thread-12:QuorumBase@318] - Shutting down leader election QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233
2017-01-24 22:12:26,229 [myid:] - INFO [Thread-12:QuorumBase@323] - Waiting for QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233 to exit thread
2017-01-24 22:12:26,611 [myid:] - WARN [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2017-01-24 22:12:26,611 [myid:] - INFO [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11227
2017-01-24 22:12:26,612 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11227:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:42834
2017-01-24 22:12:26,612 [myid:] - INFO [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:42834, server: localhost/127.0.0.1:11227
2017-01-24 22:12:26,612 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11227:ZooKeeperServer@832] - Client attempting to renew session 0x159d440f0ed0000 at /127.0.0.1:42834
2017-01-24 22:12:26,613 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11227:Learner@107] - Revalidating client: 0x159d440f0ed0000
2017-01-24 22:12:26,614 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:ZooKeeperServer@595] - Established session 0x159d440f0ed0000 with negotiated timeout 30000 for client /127.0.0.1:42834
2017-01-24 22:12:26,614 [myid:] - INFO [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@1235] - Session establishment complete on server localhost/127.0.0.1:11227, sessionid = 0x159d440f0ed0000, negotiated timeout = 30000
2017-01-24 22:12:27,224 [myid:2] - INFO [localhost/127.0.0.1:11235:QuorumCnxManager$Listener@728] - Leaving listener
2017-01-24 22:12:27,435 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:Follower@167] - shutdown called
java.lang.Exception: shutdown Follower
	at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:167)
	at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:855)
2017-01-24 22:12:27,435 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:FollowerZooKeeperServer@139] - Shutting down
2017-01-24 22:12:27,435 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:ZooKeeperServer@419] - shutting down
2017-01-24 22:12:27,435 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:FollowerRequestProcessor@105] - Shutting down
2017-01-24 22:12:27,436 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:CommitProcessor@181] - Shutting down
2017-01-24 22:12:27,436 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:FinalRequestProcessor@415] - shutdown of request processor complete
2017-01-24 22:12:27,436 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:SyncRequestProcessor@175] - Shutting down
2017-01-24 22:12:27,436 [myid:2] - WARN [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:QuorumPeer@879] - QuorumPeer main thread exited
2017-01-24 22:12:27,447 [myid:] - INFO [Thread-12:QuorumPeerTestBase$MainThread@81] - id = 2 tmpDir = /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2397521463065956250.junit.dir clientPort = 11233
2017-01-24 22:12:27,452 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:27,453 [myid:] - INFO [Thread-12:ClientBase@246] - server 127.0.0.1:11233 not up java.net.ConnectException: Connection refused
2017-01-24 22:12:27,453 [myid:] - INFO [Thread-36:QuorumPeerConfig@111] - Reading configuration from: /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2397521463065956250.junit.dir/zoo.cfg
2017-01-24 22:12:27,454 [myid:] - INFO [Thread-36:QuorumPeerConfig@374] - Defaulting to majority quorums
2017-01-24 22:12:27,454 [myid:2] - INFO [Thread-36:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3
2017-01-24 22:12:27,454 [myid:2] - INFO [Thread-36:DatadirCleanupManager@79] - autopurge.purgeInterval set to 0
2017-01-24 22:12:27,454 [myid:2] - INFO [Thread-36:DatadirCleanupManager@101] - Purge task is not scheduled.
2017-01-24 22:12:27,455 [myid:2] - WARN [Thread-36:QuorumPeerMain@129] - Unable to register log4j JMX control
javax.management.InstanceAlreadyExistsException: log4j:hiearchy=default
	at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
	at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
	at org.apache.zookeeper.jmx.ManagedUtil.registerLog4jMBeans(ManagedUtil.java:53)
	at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:127)
	at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:116)
	at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.run(QuorumPeerTestBase.java:140)
	at java.lang.Thread.run(Thread.java:745)
2017-01-24 22:12:27,455 [myid:2] - INFO [Thread-36:QuorumPeerMain@132] - Starting quorum peer
2017-01-24 22:12:27,456 [myid:2] - INFO [Thread-36:NIOServerCnxnFactory@94] - binding to port 0.0.0.0/0.0.0.0:11233
2017-01-24 22:12:27,456 [myid:2] - INFO [Thread-36:QuorumPeer@1048] - minSessionTimeout set to -1
2017-01-24 22:12:27,456 [myid:2] - INFO [Thread-36:QuorumPeer@1059] - maxSessionTimeout set to -1
2017-01-24 22:12:27,457 [myid:2] - INFO [Thread-36:QuorumPeer@1279] - quorum.auth.enableSasl set to true
2017-01-24 22:12:27,457 [myid:2] - INFO [Thread-36:QuorumPeer@1264] - quorum.auth.serverRequireSasl set to false
2017-01-24 22:12:27,458 [myid:2] - INFO [Thread-36:QuorumPeer@1270] - quorum.auth.learnerRequireSasl set to false
2017-01-24 22:12:27,458 [myid:2] - INFO [Thread-36:QuorumPeer@1286] - quorum.auth.kerberos.servicePrincipal set to zkquorum/localhost
2017-01-24 22:12:27,458 [myid:2] - INFO [Thread-36:QuorumPeer@1298] - quorum.auth.server.saslLoginContext set to QuorumServer
2017-01-24 22:12:27,459 [myid:2] - INFO [Thread-36:QuorumPeer@1292] - quorum.auth.learner.saslLoginContext set to QuorumLearner
2017-01-24 22:12:27,459 [myid:2] - INFO [Thread-36:QuorumPeer@1306] - quorum.cnxn.threads.size set to 20
2017-01-24 22:12:27,459 [myid:2] - INFO [Thread-36:Login@294] - QuorumServer successfully logged in.
2017-01-24 22:12:27,459 [myid:2] - INFO [Thread-36:Login@294] - QuorumLearner successfully logged in.
2017-01-24 22:12:27,460 [myid:2] - INFO [Thread-36:QuorumPeer@540] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2017-01-24 22:12:27,461 [myid:2] - INFO [Thread-36:QuorumPeer@555] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2017-01-24 22:12:27,479 [myid:2] - INFO [Thread-37:QuorumCnxManager$Listener@691] - My election bind port: 0.0.0.0/0.0.0.0:11235
2017-01-24 22:12:27,481 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:QuorumPeer@781] - LOOKING
2017-01-24 22:12:27,481 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:FastLeaderElection@744] - New election. My id = 2, proposed zxid=0x0
2017-01-24 22:12:27,482 [myid:0] - INFO [localhost/127.0.0.1:11229:QuorumCnxManager$Listener@698] - Received connection request /127.0.0.1:51850
2017-01-24 22:12:27,483 [myid:1] - INFO [localhost/127.0.0.1:11232:QuorumCnxManager$Listener@698] - Received connection request /127.0.0.1:45198
2017-01-24 22:12:27,483 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:27,488 [myid:2] - INFO [QuorumConnectionThread-[myid=2]-1:SaslQuorumAuthLearner@79] - Skipping SASL authentication as quorum.auth.learnerRequireSasl=false
2017-01-24 22:12:27,493 [myid:2] - INFO [QuorumConnectionThread-[myid=2]-2:SaslQuorumAuthLearner@79] - Skipping SASL authentication as quorum.auth.learnerRequireSasl=false
2017-01-24 22:12:27,496 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x100000003 (n.zxid), 0x2 (n.round), LOOKING (n.state), 1 (n.sid), 0x1 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:27,497 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEPoch), LEADING (my state)
2017-01-24 22:12:27,501 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x100000003 (n.zxid), 0x2 (n.round), LEADING (n.state), 1 (n.sid), 0x1 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:27,501 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x100000003 (n.zxid), 0x2 (n.round), LOOKING (n.state), 2 (n.sid), 0x1 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:27,502 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x100000003 (n.zxid), 0x2 (n.round), LOOKING (n.state), 2 (n.sid), 0x1 (n.peerEPoch), LEADING (my state)
2017-01-24 22:12:27,502 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x100000003 (n.zxid), 0x2 (n.round), LOOKING (n.state), 0 (n.sid), 0x1 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:27,502 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x100000003 (n.zxid), 0x2 (n.round), LEADING (n.state), 1 (n.sid), 0x1 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:27,502 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEPoch), FOLLOWING (my state)
2017-01-24 22:12:27,502 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x100000003 (n.zxid), 0x2 (n.round), LOOKING (n.state), 2 (n.sid), 0x1 (n.peerEPoch), FOLLOWING (my state)
2017-01-24 22:12:27,503 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x100000003 (n.zxid), 0x2 (n.round), FOLLOWING (n.state), 0 (n.sid), 0x1 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:27,503 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x100000003 (n.zxid), 0x2 (n.round), FOLLOWING (n.state), 0 (n.sid), 0x1 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:27,703 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:27,704 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:QuorumPeer@849] - FOLLOWING
2017-01-24 22:12:27,704 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:ZooKeeperServer@162] - Created server with tickTime 4000 minSessionTimeout 8000 maxSessionTimeout 80000 datadir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2397521463065956250.junit.dir/data/version-2 snapdir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2397521463065956250.junit.dir/data/version-2
2017-01-24 22:12:27,704 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45808
2017-01-24 22:12:27,704 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:Follower@64] - FOLLOWING - LEADER ELECTION TOOK - 223
2017-01-24 22:12:27,705 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45808
2017-01-24 22:12:27,705 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:SaslQuorumAuthLearner@79] - Skipping SASL authentication as quorum.auth.learnerRequireSasl=false
2017-01-24 22:12:27,706 [myid:1] - INFO [LearnerHandler-/127.0.0.1:50178:LearnerHandler@287] - Follower sid: 2 : info : org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@1b4476c1
2017-01-24 22:12:27,709 [myid:2] - INFO [Thread-38:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45808 (no session established for client)
2017-01-24 22:12:27,710 [myid:1] - INFO [LearnerHandler-/127.0.0.1:50178:LearnerHandler@342] - Synchronizing with Follower sid: 2 maxCommittedLog=0x200000001 minCommittedLog=0x200000001 peerLastZxid=0x0
2017-01-24 22:12:27,710 [myid:1] - WARN [LearnerHandler-/127.0.0.1:50178:LearnerHandler@405] - Unhandled proposal scenario
2017-01-24 22:12:27,710 [myid:1] - INFO [LearnerHandler-/127.0.0.1:50178:LearnerHandler@441] - Sending snapshot last zxid of peer is 0x0 zxid of leader is 0x200000001sent zxid of db as 0x200000001
2017-01-24 22:12:27,710 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:Learner@329] - Getting a snapshot from leader
2017-01-24 22:12:27,712 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:FileTxnSnapLog@281] - Snapshotting: 0x200000001 to /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2397521463065956250.junit.dir/data/version-2/snapshot.200000001
2017-01-24 22:12:27,713 [myid:1] - INFO [LearnerHandler-/127.0.0.1:50178:LearnerHandler@477] - Received NEWLEADER-ACK message from 2
2017-01-24 22:12:27,935 [myid:2] - INFO [WorkerSender[myid=2]:FastLeaderElection$Messenger$WorkerSender@370] - WorkerSender is down
2017-01-24 22:12:27,959 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:27,960 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45810
2017-01-24 22:12:27,960 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45810
2017-01-24 22:12:27,961 [myid:2] - INFO [Thread-40:NIOServerCnxn$StatCommand@655] - Stat command output
2017-01-24 22:12:27,962 [myid:2] - INFO [Thread-40:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45810 (no session established for client)
2017-01-24 22:12:27,964 [myid:2] - WARN [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:Follower@119] - Got zxid 0x200000002 expected 0x1
2017-01-24 22:12:27,964 [myid:2] - INFO [SyncThread:2:FileTxnLog@199] - Creating new log file: log.200000002
2017-01-24 22:12:27,967 [myid:] - INFO [Thread-12:QuorumAuthUpgradeTest@229] - Restarting server myid=0
2017-01-24 22:12:27,967 [myid:] - INFO [Thread-12:QuorumBase@314] - Shutting down quorum peer QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230
2017-01-24 22:12:27,967 [myid:] - INFO [Thread-12:Leader@491] - Shutting down
2017-01-24 22:12:27,967 [myid:] - INFO [Thread-12:Leader@497] - Shutdown called
java.lang.Exception: shutdown Leader! reason: quorum Peer shutdown
	at org.apache.zookeeper.server.quorum.Leader.shutdown(Leader.java:497)
	at org.apache.zookeeper.server.quorum.QuorumPeer.shutdown(QuorumPeer.java:893)
	at org.apache.zookeeper.test.QuorumBase.shutdown(QuorumBase.java:315)
	at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$TestQPMain.shutdown(QuorumPeerTestBase.java:59)
	at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.shutdown(QuorumPeerTestBase.java:152)
	at org.apache.zookeeper.server.quorum.auth.QuorumAuthTestBase.shutdown(QuorumAuthTestBase.java:138)
	at org.apache.zookeeper.server.quorum.auth.QuorumAuthUpgradeTest.restartServer(QuorumAuthUpgradeTest.java:230)
	at org.apache.zookeeper.server.quorum.auth.QuorumAuthUpgradeTest.testRollingUpgrade(QuorumAuthUpgradeTest.java:203)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
	at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:52)
	at org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28)
2017-01-24 22:12:27,968 [myid:] - INFO [Thread-12:ZooKeeperServer@419] - shutting down
2017-01-24 22:12:27,968 [myid:] - INFO [Thread-12:SessionTrackerImpl@225] - Shutting down
2017-01-24 22:12:27,968 [myid:] - INFO [Thread-12:PrepRequestProcessor@761] - Shutting down
2017-01-24 22:12:27,968 [myid:] - INFO [Thread-12:ProposalRequestProcessor@88] - Shutting down
2017-01-24 22:12:27,968 [myid:] - INFO [Thread-12:CommitProcessor@181] - Shutting down
2017-01-24 22:12:27,969 [myid:1] - INFO [CommitProcessor:1:CommitProcessor@150] - CommitProcessor exited loop!
2017-01-24 22:12:27,969 [myid:] - INFO [Thread-12:Leader$ToBeAppliedRequestProcessor@656] - Shutting down
2017-01-24 22:12:27,969 [myid:] - INFO [Thread-12:FinalRequestProcessor@415] - shutdown of request processor complete
2017-01-24 22:12:27,969 [myid:] - INFO [Thread-12:SyncRequestProcessor@175] - Shutting down
2017-01-24 22:12:27,968 [myid:1] - INFO [Thread-32:Leader$LearnerCnxAcceptor@318] - exception while shutting down acceptor: java.net.SocketException: Socket closed
2017-01-24 22:12:27,971 [myid:1] - INFO [ProcessThread(sid:1 cport:-1)::PrepRequestProcessor@143] - PrepRequestProcessor exited loop!
2017-01-24 22:12:27,971 [myid:1] - INFO [SyncThread:1:SyncRequestProcessor@155] - SyncRequestProcessor exited!
2017-01-24 22:12:27,972 [myid:1] - WARN [LearnerHandler-/127.0.0.1:50170:LearnerHandler@598] - ******* GOODBYE /127.0.0.1:50170 ********
2017-01-24 22:12:27,972 [myid:1] - WARN [LearnerHandler-/127.0.0.1:50170:LearnerHandler@610] - Ignoring unexpected exception
java.lang.InterruptedException
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1219)
	at java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:340)
	at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:338)
	at org.apache.zookeeper.server.quorum.LearnerHandler.shutdown(LearnerHandler.java:608)
	at org.apache.zookeeper.server.quorum.LearnerHandler.run(LearnerHandler.java:601)
2017-01-24 22:12:27,973 [myid:1] - WARN [LearnerHandler-/127.0.0.1:50178:LearnerHandler@598] - ******* GOODBYE /127.0.0.1:50178 ********
2017-01-24 22:12:27,973 [myid:1] - WARN [LearnerHandler-/127.0.0.1:50178:LearnerHandler@610] - Ignoring unexpected exception
java.lang.InterruptedException
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1219)
	at java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:340)
	at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:338)
	at org.apache.zookeeper.server.quorum.LearnerHandler.shutdown(LearnerHandler.java:608)
	at org.apache.zookeeper.server.quorum.LearnerHandler.run(LearnerHandler.java:601)
2017-01-24 22:12:27,973 [myid:1] - WARN [LearnerHandler-/127.0.0.1:50169:LearnerHandler@598] - ******* GOODBYE /127.0.0.1:50169 ********
2017-01-24 22:12:27,974 [myid:0] - WARN [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:Follower@90] - Exception when following the leader
java.io.EOFException
	at java.io.DataInputStream.readInt(DataInputStream.java:392)
	at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63)
	at org.apache.zookeeper.server.quorum.QuorumPacket.deserialize(QuorumPacket.java:83)
	at org.apache.jute.BinaryInputArchive.readRecord(BinaryInputArchive.java:99)
	at org.apache.zookeeper.server.quorum.Learner.readPacket(Learner.java:152)
	at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:86)
	at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:851)
2017-01-24 22:12:27,974 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:Follower@167] - shutdown called
java.lang.Exception: shutdown Follower
	at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:167)
	at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:855)
2017-01-24 22:12:27,974 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxnFactory@224] - NIOServerCnxn factory exited run method
2017-01-24 22:12:27,975 [myid:2] - WARN [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:Follower@90] - Exception when following the leader
java.io.EOFException
	at java.io.DataInputStream.readInt(DataInputStream.java:392)
	at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63)
	at org.apache.zookeeper.server.quorum.QuorumPacket.deserialize(QuorumPacket.java:83)
	at org.apache.jute.BinaryInputArchive.readRecord(BinaryInputArchive.java:99)
	at org.apache.zookeeper.server.quorum.Learner.readPacket(Learner.java:152)
	at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:86)
	at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:851)
2017-01-24 22:12:27,975 [myid:1] - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:QuorumPeer@867] - Unexpected exception
java.lang.InterruptedException: sleep interrupted
	at java.lang.Thread.sleep(Native Method)
	at org.apache.zookeeper.server.quorum.Leader.lead(Leader.java:451)
	at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:864)
2017-01-24 22:12:27,975 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:Leader@491] - Shutting down
2017-01-24 22:12:27,975 [myid:1] - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:QuorumPeer@879] - QuorumPeer main thread exited
2017-01-24 22:12:27,975 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:Follower@167] - shutdown called
java.lang.Exception: shutdown Follower
	at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:167)
	at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:855)
2017-01-24 22:12:27,975 [myid:1] - ERROR [localhost/127.0.0.1:11232:QuorumCnxManager$Listener@715] - Exception while listening
java.net.SocketException: Socket closed
	at java.net.PlainSocketImpl.socketAccept(Native Method)
	at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398)
	at java.net.ServerSocket.implAccept(ServerSocket.java:530)
	at java.net.ServerSocket.accept(ServerSocket.java:498)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager$Listener.run(QuorumCnxManager.java:696)
2017-01-24 22:12:27,976 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:FollowerZooKeeperServer@139] - Shutting down
2017-01-24 22:12:27,977 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:ZooKeeperServer@419] - shutting down
2017-01-24 22:12:27,977 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:FollowerRequestProcessor@105] - Shutting down
2017-01-24 22:12:27,978 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:CommitProcessor@181] - Shutting down
2017-01-24 22:12:27,978 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:FinalRequestProcessor@415] - shutdown of request processor complete
2017-01-24 22:12:27,977 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:42834 which had sessionid 0x159d440f0ed0000
2017-01-24 22:12:27,977 [myid:1] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@980] - Connection broken for id 0, my id = 1, error =
java.net.SocketException: Socket closed
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.read(SocketInputStream.java:152)
    at java.net.SocketInputStream.read(SocketInputStream.java:122)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
    at java.io.DataInputStream.readInt(DataInputStream.java:387)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:965)
2017-01-24 22:12:27,979 [myid:1] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@983] - Interrupting SendWorker
2017-01-24 22:12:27,977 [myid:0] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@980] - Connection broken for id 1, my id = 0, error =
java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:392)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:965)
2017-01-24 22:12:27,977 [myid:1] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@896] - Interrupted while waiting for message on queue
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1049)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:73)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:884)
2017-01-24 22:12:27,980 [myid:1] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@906] - Send worker leaving thread
2017-01-24 22:12:27,979 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:FollowerZooKeeperServer@139] - Shutting down
2017-01-24 22:12:27,980 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:ZooKeeperServer@419] - shutting down
2017-01-24 22:12:27,980 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:FollowerRequestProcessor@105] - Shutting down
2017-01-24 22:12:27,980 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:CommitProcessor@181] - Shutting down
2017-01-24 22:12:27,980 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:FinalRequestProcessor@415] - shutdown of request processor complete
2017-01-24 22:12:27,980 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:SyncRequestProcessor@175] - Shutting down
2017-01-24 22:12:27,979 [myid:] - INFO [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect
2017-01-24 22:12:27,979 [myid:1] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@980] - Connection broken for id 2, my id = 1, error =
java.net.SocketException: Socket closed
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.read(SocketInputStream.java:152)
    at java.net.SocketInputStream.read(SocketInputStream.java:122)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
    at java.io.DataInputStream.readInt(DataInputStream.java:387)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:965)
2017-01-24 22:12:27,981 [myid:1] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@983] - Interrupting SendWorker
2017-01-24 22:12:27,978 [myid:] - INFO [Thread-12:QuorumBase@318] - Shutting down leader election QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230
2017-01-24 22:12:27,978 [myid:1] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@896] - Interrupted while waiting for message on queue
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1049)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:73)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:884)
2017-01-24 22:12:27,981 [myid:1] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@906] - Send worker leaving thread
2017-01-24 22:12:27,978 [myid:2] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@980] - Connection broken for id 1, my id = 2, error =
java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:392)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:965)
2017-01-24 22:12:27,982 [myid:2] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@983] - Interrupting SendWorker
2017-01-24 22:12:27,978 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:SyncRequestProcessor@175] - Shutting down
2017-01-24 22:12:27,978 [myid:2] - INFO [CommitProcessor:2:CommitProcessor@150] - CommitProcessor exited loop!
2017-01-24 22:12:27,978 [myid:2] - INFO [FollowerRequestProcessor:2:FollowerRequestProcessor@95] - FollowerRequestProcessor exited loop!
2017-01-24 22:12:27,982 [myid:2] - INFO [SyncThread:2:SyncRequestProcessor@155] - SyncRequestProcessor exited!
2017-01-24 22:12:27,982 [myid:2] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@896] - Interrupted while waiting for message on queue
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1049)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:73)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:884)
2017-01-24 22:12:27,986 [myid:2] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@906] - Send worker leaving thread
2017-01-24 22:12:27,981 [myid:] - INFO [Thread-12:QuorumBase@323] - Waiting for QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230 to exit thread
2017-01-24 22:12:27,980 [myid:0] - INFO [SyncThread:0:SyncRequestProcessor@155] - SyncRequestProcessor exited!
2017-01-24 22:12:27,980 [myid:0] - INFO [CommitProcessor:0:CommitProcessor@150] - CommitProcessor exited loop!
2017-01-24 22:12:27,980 [myid:0] - INFO [FollowerRequestProcessor:0:FollowerRequestProcessor@95] - FollowerRequestProcessor exited loop!
2017-01-24 22:12:27,979 [myid:0] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@983] - Interrupting SendWorker
2017-01-24 22:12:27,987 [myid:0] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@896] - Interrupted while waiting for message on queue
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1049)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:73)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:884)
2017-01-24 22:12:27,986 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:QuorumPeer@781] - LOOKING
2017-01-24 22:12:27,985 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:QuorumPeer@781] - LOOKING
2017-01-24 22:12:27,988 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:FileSnap@83] - Reading snapshot /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2397521463065956250.junit.dir/data/version-2/snapshot.200000001
2017-01-24 22:12:27,988 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:FileSnap@83] - Reading snapshot /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test6261823493417515862.junit.dir/data/version-2/snapshot.100000003
2017-01-24 22:12:27,990 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:FastLeaderElection@744] - New election. My id = 2, proposed zxid=0x200000002
2017-01-24 22:12:27,990 [myid:0] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@906] - Send worker leaving thread
2017-01-24 22:12:27,991 [myid:] - INFO [Thread-12:QuorumPeerTestBase$MainThread@81] - id = 1 tmpDir = /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test4772652757083079773.junit.dir clientPort = 11230
2017-01-24 22:12:27,991 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x200000002 (n.zxid), 0x3 (n.round), LOOKING (n.state), 2 (n.sid), 0x2 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:27,992 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:FastLeaderElection@744] - New election. My id = 0, proposed zxid=0x200000002
2017-01-24 22:12:27,992 [myid:2] - WARN [WorkerSender[myid=2]:QuorumCnxManager@559] - Cannot open channel to 1 at election address localhost/127.0.0.1:11232
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:538)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:514)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:393)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:365)
    at java.lang.Thread.run(Thread.java:745)
2017-01-24 22:12:27,992 [myid:] - INFO [Thread-41:QuorumPeerConfig@111] - Reading configuration from: /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test4772652757083079773.junit.dir/zoo.cfg
2017-01-24 22:12:27,992 [myid:0] - WARN [WorkerSender[myid=0]:QuorumCnxManager@559] - Cannot open channel to 1 at election address localhost/127.0.0.1:11232
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:538)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:514)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:393)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:365)
    at java.lang.Thread.run(Thread.java:745)
2017-01-24 22:12:27,992 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 0 (n.leader), 0x200000002 (n.zxid), 0x3 (n.round), LOOKING (n.state), 0 (n.sid), 0x2 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:27,994 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x200000002 (n.zxid), 0x3 (n.round), LOOKING (n.state), 0 (n.sid), 0x2 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:27,994 [myid:0] - WARN [WorkerSender[myid=0]:QuorumCnxManager@559] - Cannot open channel to 1 at election address localhost/127.0.0.1:11232
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:538)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:514)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:393)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:365)
    at java.lang.Thread.run(Thread.java:745)
2017-01-24 22:12:27,993 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x200000002 (n.zxid), 0x3 (n.round), LOOKING (n.state), 2 (n.sid), 0x2 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:27,994 [myid:] - INFO [Thread-41:QuorumPeerConfig@374] - Defaulting to majority quorums
2017-01-24 22:12:27,995 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11230
2017-01-24 22:12:27,995 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x200000002 (n.zxid), 0x3 (n.round), LOOKING (n.state), 0 (n.sid), 0x2 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:27,995 [myid:1] - INFO [Thread-41:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3
2017-01-24 22:12:27,995 [myid:1] - INFO [Thread-41:DatadirCleanupManager@79] - autopurge.purgeInterval set to 0
2017-01-24 22:12:27,995 [myid:1] - INFO [Thread-41:DatadirCleanupManager@101] - Purge task is not scheduled.
2017-01-24 22:12:27,996 [myid:1] - WARN [Thread-41:QuorumPeerMain@129] - Unable to register log4j JMX control
javax.management.InstanceAlreadyExistsException: log4j:hiearchy=default
    at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
    at org.apache.zookeeper.jmx.ManagedUtil.registerLog4jMBeans(ManagedUtil.java:53)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:127)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:116)
    at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.run(QuorumPeerTestBase.java:140)
    at java.lang.Thread.run(Thread.java:745)
2017-01-24 22:12:28,001 [myid:1] - INFO [Thread-41:QuorumPeerMain@132] - Starting quorum peer
2017-01-24 22:12:28,001 [myid:1] - INFO [Thread-41:NIOServerCnxnFactory@94] - binding to port 0.0.0.0/0.0.0.0:11230
2017-01-24 22:12:28,002 [myid:1] - INFO [Thread-41:QuorumPeer@1048] - minSessionTimeout set to -1
2017-01-24 22:12:28,002 [myid:1] - INFO [Thread-41:QuorumPeer@1059] - maxSessionTimeout set to -1
2017-01-24 22:12:28,002 [myid:1] - INFO [Thread-41:QuorumPeer@1279] - quorum.auth.enableSasl set to true
2017-01-24 22:12:28,002 [myid:1] - INFO [Thread-41:QuorumPeer@1264] - quorum.auth.serverRequireSasl set to false
2017-01-24 22:12:28,002 [myid:1] - INFO [Thread-41:QuorumPeer@1270] - quorum.auth.learnerRequireSasl set to true
2017-01-24 22:12:28,002 [myid:1] - INFO [Thread-41:QuorumPeer@1286] - quorum.auth.kerberos.servicePrincipal set to zkquorum/localhost
2017-01-24 22:12:28,002 [myid:1] - INFO [Thread-41:QuorumPeer@1298] - quorum.auth.server.saslLoginContext set to QuorumServer
2017-01-24 22:12:28,003 [myid:1] - INFO [Thread-41:QuorumPeer@1292] - quorum.auth.learner.saslLoginContext set to QuorumLearner
2017-01-24 22:12:28,003 [myid:1] - INFO [Thread-41:QuorumPeer@1306] - quorum.cnxn.threads.size set to 20
2017-01-24 22:12:28,003 [myid:1] - INFO [Thread-41:Login@294] - QuorumServer successfully logged in.
2017-01-24 22:12:28,003 [myid:1] - INFO [Thread-41:Login@294] - QuorumLearner successfully logged in.
2017-01-24 22:12:28,004 [myid:1] - INFO [Thread-41:QuorumPeer@540] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2017-01-24 22:12:27,995 [myid:] - INFO [Thread-12:ClientBase@246] - server 127.0.0.1:11230 not up java.net.ConnectException: Connection refused
2017-01-24 22:12:28,000 [myid:2] - INFO [SessionTracker:SessionTrackerImpl@162] - SessionTrackerImpl exited loop!
2017-01-24 22:12:28,000 [myid:1] - INFO [SessionTracker:SessionTrackerImpl@162] - SessionTrackerImpl exited loop!
2017-01-24 22:12:28,006 [myid:1] - INFO [Thread-41:QuorumPeer@555] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2017-01-24 22:12:28,012 [myid:1] - INFO [Thread-42:QuorumCnxManager$Listener@691] - My election bind port: 0.0.0.0/0.0.0.0:11232
2017-01-24 22:12:28,013 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:QuorumPeer@781] - LOOKING
2017-01-24 22:12:28,013 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:FastLeaderElection@744] - New election. My id = 1, proposed zxid=0x0
2017-01-24 22:12:28,014 [myid:0] - INFO [localhost/127.0.0.1:11229:QuorumCnxManager$Listener@698] - Received connection request /127.0.0.1:51859
2017-01-24 22:12:28,014 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:28,014 [myid:2] - INFO [localhost/127.0.0.1:11235:QuorumCnxManager$Listener@698] - Received connection request /127.0.0.1:50365
2017-01-24 22:12:28,021 [myid:1] - INFO [QuorumConnectionThread-[myid=1]-2:SecurityUtils@68] - QuorumLearner will use DIGEST-MD5 as SASL mechanism.
2017-01-24 22:12:28,021 [myid:1] - INFO [QuorumConnectionThread-[myid=1]-1:SecurityUtils@68] - QuorumLearner will use DIGEST-MD5 as SASL mechanism.
2017-01-24 22:12:28,148 [myid:0] - INFO [QuorumConnectionThread-[myid=0]-1:SaslQuorumServerCallbackHandler@143] - Successfully authenticated learner: authenticationID=test; authorizationID=test.
2017-01-24 22:12:28,148 [myid:2] - INFO [QuorumConnectionThread-[myid=2]-3:SaslQuorumServerCallbackHandler@143] - Successfully authenticated learner: authenticationID=test; authorizationID=test.
2017-01-24 22:12:28,149 [myid:0] - INFO [QuorumConnectionThread-[myid=0]-1:SaslQuorumAuthServer@114] - Successfully completed the authentication using SASL. learner addr: /127.0.0.1:51859
2017-01-24 22:12:28,149 [myid:2] - INFO [QuorumConnectionThread-[myid=2]-3:SaslQuorumAuthServer@114] - Successfully completed the authentication using SASL. learner addr: /127.0.0.1:50365
2017-01-24 22:12:28,150 [myid:1] - INFO [QuorumConnectionThread-[myid=1]-2:SaslQuorumAuthLearner@151] - Successfully completed the authentication using SASL. server addr: localhost/127.0.0.1:11235, status: SUCCESS
2017-01-24 22:12:28,150 [myid:1] - INFO [QuorumConnectionThread-[myid=1]-2:QuorumCnxManager@331] - Have smaller server identifier, so dropping the connection: (2, 1)
2017-01-24 22:12:28,152 [myid:1] - INFO [QuorumConnectionThread-[myid=1]-1:SaslQuorumAuthLearner@151] - Successfully completed the authentication using SASL. server addr: localhost/127.0.0.1:11229, status: SUCCESS
2017-01-24 22:12:28,157 [myid:1] - INFO [localhost/127.0.0.1:11232:QuorumCnxManager$Listener@698] - Received connection request /127.0.0.1:45208
2017-01-24 22:12:28,157 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x200000002 (n.zxid), 0x3 (n.round), LOOKING (n.state), 0 (n.sid), 0x2 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:28,158 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:28,158 [myid:2] - INFO [QuorumConnectionThread-[myid=2]-2:SaslQuorumAuthLearner@79] - Skipping SASL authentication as quorum.auth.learnerRequireSasl=false
2017-01-24 22:12:28,159 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x200000002 (n.zxid), 0x3 (n.round), LOOKING (n.state), 1 (n.sid), 0x2 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:28,160 [myid:2] - INFO [localhost/127.0.0.1:11235:QuorumCnxManager$Listener@698] - Received connection request /127.0.0.1:50367
2017-01-24 22:12:28,160 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x200000002 (n.zxid), 0x3 (n.round), LOOKING (n.state), 0 (n.sid), 0x2 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:28,160 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x200000002 (n.zxid), 0x3 (n.round), LOOKING (n.state), 1 (n.sid), 0x2 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:28,160 [myid:1] - INFO [QuorumConnectionThread-[myid=1]-1:SecurityUtils@68] - QuorumLearner will use DIGEST-MD5 as SASL mechanism.
2017-01-24 22:12:28,167 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x200000002 (n.zxid), 0x3 (n.round), LOOKING (n.state), 1 (n.sid), 0x2 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:28,167 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x200000002 (n.zxid), 0x3 (n.round), LOOKING (n.state), 2 (n.sid), 0x2 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:28,179 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection$Messenger$WorkerReceiver@340] - WorkerReceiver is down
2017-01-24 22:12:28,180 [myid:2] - INFO [QuorumConnectionThread-[myid=2]-2:SaslQuorumServerCallbackHandler@143] - Successfully authenticated learner: authenticationID=test; authorizationID=test.
2017-01-24 22:12:28,180 [myid:2] - INFO [QuorumConnectionThread-[myid=2]-2:SaslQuorumAuthServer@114] - Successfully completed the authentication using SASL. learner addr: /127.0.0.1:50367
2017-01-24 22:12:28,181 [myid:1] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@980] - Connection broken for id 2, my id = 1, error =
java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:392)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:965)
2017-01-24 22:12:28,181 [myid:1] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@983] - Interrupting SendWorker
2017-01-24 22:12:28,181 [myid:2] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@980] - Connection broken for id 1, my id = 2, error =
java.net.SocketException: Socket closed
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.read(SocketInputStream.java:152)
    at java.net.SocketInputStream.read(SocketInputStream.java:122)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
    at java.io.DataInputStream.readInt(DataInputStream.java:387)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:965)
2017-01-24 22:12:28,182 [myid:2] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@983] - Interrupting SendWorker
2017-01-24 22:12:28,183 [myid:2] - INFO [QuorumConnectionThread-[myid=2]-3:SaslQuorumAuthLearner@79] - Skipping SASL authentication as quorum.auth.learnerRequireSasl=false
2017-01-24 22:12:28,181 [myid:2] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@896] - Interrupted while waiting for message on queue
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1049)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:73)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:884)
2017-01-24 22:12:28,184 [myid:2] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@906] - Send worker leaving thread
2017-01-24 22:12:28,182 [myid:1] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@896] - Interrupted while waiting for message on queue
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1049)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:73)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:884)
2017-01-24 22:12:28,184 [myid:1] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@906] - Send worker leaving thread
2017-01-24 22:12:28,182 [myid:1] - INFO [QuorumConnectionThread-[myid=1]-1:SaslQuorumAuthLearner@151] - Successfully completed the authentication using SASL. server addr: localhost/127.0.0.1:11235, status: SUCCESS
2017-01-24 22:12:28,182 [myid:1] - INFO [localhost/127.0.0.1:11232:QuorumCnxManager$Listener@698] - Received connection request /127.0.0.1:45210
2017-01-24 22:12:28,184 [myid:1] - INFO [QuorumConnectionThread-[myid=1]-1:QuorumCnxManager@331] - Have smaller server identifier, so dropping the connection: (2, 1)
2017-01-24 22:12:28,189 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x200000002 (n.zxid), 0x3 (n.round), LOOKING (n.state), 1 (n.sid), 0x2 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:28,189 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x200000002 (n.zxid), 0x3 (n.round), LOOKING (n.state), 2 (n.sid), 0x2 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:28,193 [myid:1] - ERROR [LearnerHandler-/127.0.0.1:57759:LearnerHandler@585] - Unexpected exception causing shutdown while sock still open
java.net.SocketException: Connection reset
    at java.net.SocketInputStream.read(SocketInputStream.java:196)
    at java.net.SocketInputStream.read(SocketInputStream.java:122)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
    at java.io.DataInputStream.readInt(DataInputStream.java:387)
    at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63)
    at org.apache.zookeeper.server.quorum.QuorumPacket.deserialize(QuorumPacket.java:83)
    at org.apache.jute.BinaryInputArchive.readRecord(BinaryInputArchive.java:99)
    at org.apache.zookeeper.server.quorum.LearnerHandler.run(LearnerHandler.java:499)
2017-01-24 22:12:28,193 [myid:1] - WARN [LearnerHandler-/127.0.0.1:57759:LearnerHandler@598] - ******* GOODBYE /127.0.0.1:57759 ********
2017-01-24 22:12:28,212 [myid:2] - INFO [WorkerSender[myid=2]:FastLeaderElection$Messenger$WorkerSender@370] - WorkerSender is down
2017-01-24 22:12:28,215 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection$Messenger$WorkerReceiver@340] - WorkerReceiver is down
2017-01-24 22:12:28,256 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11230
2017-01-24 22:12:28,257 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:52351
2017-01-24 22:12:28,257 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:52351
2017-01-24 22:12:28,258 [myid:1] - INFO [Thread-43:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:52351 (no session established for client)
2017-01-24 22:12:28,360 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:QuorumPeer@849] - FOLLOWING
2017-01-24 22:12:28,361 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:ZooKeeperServer@162] - Created server with tickTime 4000 minSessionTimeout 8000 maxSessionTimeout 80000 datadir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test6261823493417515862.junit.dir/data/version-2 snapdir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test6261823493417515862.junit.dir/data/version-2
2017-01-24 22:12:28,361 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:Follower@64] - FOLLOWING - LEADER ELECTION TOOK - 373
2017-01-24 22:12:28,363 [myid:0] - WARN [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:Learner@233] - Unexpected exception, tries=0, connecting to localhost/127.0.0.1:11234
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.Learner.connectToLeader(Learner.java:225)
    at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:72)
    at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:851)
2017-01-24 22:12:28,390 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:QuorumPeer@861] - LEADING
2017-01-24 22:12:28,390 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:ZooKeeperServer@162] - Created server with tickTime 4000 minSessionTimeout 8000 maxSessionTimeout 80000 datadir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2397521463065956250.junit.dir/data/version-2 snapdir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2397521463065956250.junit.dir/data/version-2
2017-01-24 22:12:28,390 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:Leader@356] - LEADING - LEADER ELECTION TOOK - 402
2017-01-24 22:12:28,390 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:QuorumPeer@849] - FOLLOWING
2017-01-24 22:12:28,390 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:ZooKeeperServer@162] - Created server with tickTime 4000 minSessionTimeout 8000 maxSessionTimeout 80000 datadir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test4772652757083079773.junit.dir/data/version-2 snapdir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test4772652757083079773.junit.dir/data/version-2
2017-01-24 22:12:28,390 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:Follower@64] - FOLLOWING - LEADER ELECTION TOOK - 377
2017-01-24 22:12:28,392 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:SecurityUtils@68] - QuorumLearner will use DIGEST-MD5 as SASL mechanism.
2017-01-24 22:12:28,392 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:FileSnap@83] - Reading snapshot /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2397521463065956250.junit.dir/data/version-2/snapshot.200000001 2017-01-24 22:12:28,393 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:FileTxnSnapLog@281] - Snapshotting: 0x200000002 to /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2397521463065956250.junit.dir/data/version-2/snapshot.200000002 2017-01-24 22:12:28,396 [myid:2] - INFO [Thread-44:SaslQuorumServerCallbackHandler@143] - Successfully authenticated learner: authenticationID=test; authorizationID=test. 2017-01-24 22:12:28,396 [myid:2] - INFO [Thread-44:SaslQuorumAuthServer@114] - Successfully completed the authentication using SASL. learner addr: /127.0.0.1:42592 2017-01-24 22:12:28,398 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:SaslQuorumAuthLearner@151] - Successfully completed the authentication using SASL. 
server addr: localhost/127.0.0.1:11234, status: SUCCESS 2017-01-24 22:12:28,398 [myid:2] - INFO [LearnerHandler-/127.0.0.1:42592:LearnerHandler@287] - Follower sid: 1 : info : org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@b6804a5 2017-01-24 22:12:28,404 [myid:2] - INFO [LearnerHandler-/127.0.0.1:42592:LearnerHandler@342] - Synchronizing with Follower sid: 1 maxCommittedLog=0x0 minCommittedLog=0x0 peerLastZxid=0x0 2017-01-24 22:12:28,404 [myid:2] - INFO [LearnerHandler-/127.0.0.1:42592:LearnerHandler@441] - Sending snapshot last zxid of peer is 0x0 zxid of leader is 0x300000000sent zxid of db as 0x200000002 2017-01-24 22:12:28,405 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:Learner@329] - Getting a snapshot from leader 2017-01-24 22:12:28,411 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:FileTxnSnapLog@281] - Snapshotting: 0x200000002 to /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test4772652757083079773.junit.dir/data/version-2/snapshot.200000002 2017-01-24 22:12:28,413 [myid:2] - INFO [LearnerHandler-/127.0.0.1:42592:LearnerHandler@477] - Received NEWLEADER-ACK message from 1 2017-01-24 22:12:28,413 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:Leader@934] - Have quorum of supporters, sids: [ 1,2 ]; starting up and setting last processed zxid: 0x300000000 2017-01-24 22:12:28,508 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11230 2017-01-24 22:12:28,509 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:52354 2017-01-24 22:12:28,509 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:52354 2017-01-24 22:12:28,509 [myid:1] - INFO [Thread-46:NIOServerCnxn$StatCommand@655] - Stat command output 2017-01-24 22:12:28,510 [myid:1] - INFO [Thread-46:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:52354 (no 
session established for client) 2017-01-24 22:12:28,648 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11224:Leader@491] - Shutting down 2017-01-24 22:12:28,648 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11224:Leader@497] - Shutdown called java.lang.Exception: shutdown Leader! reason: Not sufficient followers synced, only synced with sids: [ 1 ] at org.apache.zookeeper.server.quorum.Leader.shutdown(Leader.java:497) at org.apache.zookeeper.server.quorum.Leader.lead(Leader.java:472) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:864) 2017-01-24 22:12:28,648 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11224:ZooKeeperServer@419] - shutting down 2017-01-24 22:12:28,648 [myid:1] - INFO [Thread-8:Leader$LearnerCnxAcceptor@318] - exception while shutting down acceptor: java.net.SocketException: Socket closed 2017-01-24 22:12:28,648 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11224:SessionTrackerImpl@225] - Shutting down 2017-01-24 22:12:28,649 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11224:PrepRequestProcessor@761] - Shutting down 2017-01-24 22:12:28,649 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11224:ProposalRequestProcessor@88] - Shutting down 2017-01-24 22:12:28,649 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11224:CommitProcessor@181] - Shutting down 2017-01-24 22:12:28,649 [myid:1] - INFO [ProcessThread(sid:1 cport:-1)::PrepRequestProcessor@143] - PrepRequestProcessor exited loop! 2017-01-24 22:12:28,649 [myid:1] - INFO [CommitProcessor:1:CommitProcessor@150] - CommitProcessor exited loop! 
2017-01-24 22:12:28,649 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11224:Leader$ToBeAppliedRequestProcessor@656] - Shutting down 2017-01-24 22:12:28,649 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11224:FinalRequestProcessor@415] - shutdown of request processor complete 2017-01-24 22:12:28,649 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11224:SyncRequestProcessor@175] - Shutting down 2017-01-24 22:12:28,649 [myid:1] - INFO [SyncThread:1:SyncRequestProcessor@155] - SyncRequestProcessor exited! 2017-01-24 22:12:28,650 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11224:QuorumPeer@781] - LOOKING 2017-01-24 22:12:28,650 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11224:FileSnap@83] - Reading snapshot /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test4158500655544466289.junit.dir/data/version-2/snapshot.0 2017-01-24 22:12:28,651 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11224:FastLeaderElection@744] - New election. My id = 1, proposed zxid=0x100000003 2017-01-24 22:12:28,652 [myid:1] - WARN [WorkerSender[myid=1]:QuorumCnxManager@559] - Cannot open channel to 0 at election address localhost/127.0.0.1:11223 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:538) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:514) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:393) at 
org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:365) at java.lang.Thread.run(Thread.java:745) 2017-01-24 22:12:28,652 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x100000003 (n.zxid), 0x2 (n.round), LOOKING (n.state), 1 (n.sid), 0x1 (n.peerEPoch), LOOKING (my state) 2017-01-24 22:12:28,768 [myid:] - WARN [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. 2017-01-24 22:12:28,768 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11230 2017-01-24 22:12:28,769 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:52356 2017-01-24 22:12:28,769 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:52356, server: localhost/127.0.0.1:11230 2017-01-24 22:12:28,769 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:ZooKeeperServer@832] - Client attempting to renew session 0x159d440f0ed0000 at /127.0.0.1:52356 2017-01-24 22:12:28,769 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:Learner@107] - Revalidating client: 0x159d440f0ed0000 2017-01-24 22:12:28,772 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:ZooKeeperServer@595] - Established session 0x159d440f0ed0000 with negotiated timeout 30000 for client /127.0.0.1:52356 2017-01-24 22:12:28,772 [myid:] - INFO 
[Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@1235] - Session establishment complete on server localhost/127.0.0.1:11230, sessionid = 0x159d440f0ed0000, negotiated timeout = 30000 2017-01-24 22:12:28,774 [myid:1] - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:Follower@119] - Got zxid 0x300000001 expected 0x1 2017-01-24 22:12:28,775 [myid:1] - INFO [SyncThread:1:FileTxnLog@199] - Creating new log file: log.300000001 2017-01-24 22:12:28,777 [myid:] - INFO [Thread-12:QuorumAuthUpgradeTest@229] - Restarting server myid=1 2017-01-24 22:12:28,777 [myid:] - INFO [Thread-12:QuorumBase@314] - Shutting down quorum peer QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233 2017-01-24 22:12:28,777 [myid:] - INFO [Thread-12:Leader@491] - Shutting down 2017-01-24 22:12:28,777 [myid:] - INFO [Thread-12:Leader@497] - Shutdown called java.lang.Exception: shutdown Leader! reason: quorum Peer shutdown at org.apache.zookeeper.server.quorum.Leader.shutdown(Leader.java:497) at org.apache.zookeeper.server.quorum.QuorumPeer.shutdown(QuorumPeer.java:893) at org.apache.zookeeper.test.QuorumBase.shutdown(QuorumBase.java:315) at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$TestQPMain.shutdown(QuorumPeerTestBase.java:59) at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.shutdown(QuorumPeerTestBase.java:152) at org.apache.zookeeper.server.quorum.auth.QuorumAuthTestBase.shutdown(QuorumAuthTestBase.java:138) at org.apache.zookeeper.server.quorum.auth.QuorumAuthUpgradeTest.restartServer(QuorumAuthUpgradeTest.java:230) at org.apache.zookeeper.server.quorum.auth.QuorumAuthUpgradeTest.testRollingUpgrade(QuorumAuthUpgradeTest.java:204) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:52) at org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28) 2017-01-24 22:12:28,778 [myid:] - INFO [Thread-12:ZooKeeperServer@419] - shutting down 2017-01-24 22:12:28,778 [myid:2] - INFO [Thread-44:Leader$LearnerCnxAcceptor@318] - exception while shutting down acceptor: java.net.SocketException: Socket closed 2017-01-24 22:12:28,778 [myid:] - INFO [Thread-12:SessionTrackerImpl@225] - Shutting down 2017-01-24 22:12:28,778 [myid:] - INFO [Thread-12:PrepRequestProcessor@761] - Shutting down 2017-01-24 22:12:28,779 [myid:] - INFO [Thread-12:ProposalRequestProcessor@88] - Shutting down 2017-01-24 22:12:28,779 [myid:] - INFO [Thread-12:CommitProcessor@181] - Shutting down 2017-01-24 22:12:28,779 [myid:] - INFO [Thread-12:Leader$ToBeAppliedRequestProcessor@656] - Shutting down 2017-01-24 22:12:28,779 [myid:] - INFO [Thread-12:FinalRequestProcessor@415] - shutdown of request processor complete 2017-01-24 22:12:28,779 [myid:] - INFO [Thread-12:SyncRequestProcessor@175] - Shutting down 2017-01-24 22:12:28,779 [myid:2] - INFO [ProcessThread(sid:2 cport:-1)::PrepRequestProcessor@143] - PrepRequestProcessor exited loop! 2017-01-24 22:12:28,779 [myid:2] - INFO [SyncThread:2:SyncRequestProcessor@155] - SyncRequestProcessor exited! 2017-01-24 22:12:28,779 [myid:2] - INFO [CommitProcessor:2:CommitProcessor@150] - CommitProcessor exited loop! 
2017-01-24 22:12:28,782 [myid:1] - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:Follower@90] - Exception when following the leader java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63) at org.apache.zookeeper.server.quorum.QuorumPacket.deserialize(QuorumPacket.java:83) at org.apache.jute.BinaryInputArchive.readRecord(BinaryInputArchive.java:99) at org.apache.zookeeper.server.quorum.Learner.readPacket(Learner.java:152) at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:86) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:851) 2017-01-24 22:12:28,784 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@224] - NIOServerCnxn factory exited run method 2017-01-24 22:12:28,782 [myid:2] - WARN [LearnerHandler-/127.0.0.1:42592:LearnerHandler@598] - ******* GOODBYE /127.0.0.1:42592 ******** 2017-01-24 22:12:28,784 [myid:2] - WARN [LearnerHandler-/127.0.0.1:42592:LearnerHandler@610] - Ignoring unexpected exception java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1219) at java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:340) at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:338) at org.apache.zookeeper.server.quorum.LearnerHandler.shutdown(LearnerHandler.java:608) at org.apache.zookeeper.server.quorum.LearnerHandler.run(LearnerHandler.java:601) 2017-01-24 22:12:28,785 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:Follower@167] - shutdown called java.lang.Exception: shutdown Follower at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:167) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:855) 2017-01-24 22:12:28,785 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:NIOServerCnxn@1001] - Closed socket 
connection for client /127.0.0.1:52356 which had sessionid 0x159d440f0ed0000 2017-01-24 22:12:28,785 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect 2017-01-24 22:12:28,785 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:FollowerZooKeeperServer@139] - Shutting down 2017-01-24 22:12:28,786 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:ZooKeeperServer@419] - shutting down 2017-01-24 22:12:28,786 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:FollowerRequestProcessor@105] - Shutting down 2017-01-24 22:12:28,786 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:CommitProcessor@181] - Shutting down 2017-01-24 22:12:28,786 [myid:1] - INFO [FollowerRequestProcessor:1:FollowerRequestProcessor@95] - FollowerRequestProcessor exited loop! 2017-01-24 22:12:28,786 [myid:2] - WARN [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:QuorumPeer@867] - Unexpected exception java.lang.InterruptedException: sleep interrupted at java.lang.Thread.sleep(Native Method) at org.apache.zookeeper.server.quorum.Leader.lead(Leader.java:451) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:864) 2017-01-24 22:12:28,786 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:Leader@491] - Shutting down 2017-01-24 22:12:28,786 [myid:2] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@896] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at 
org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1049) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:73) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:884) 2017-01-24 22:12:28,788 [myid:2] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@906] - Send worker leaving thread 2017-01-24 22:12:28,786 [myid:2] - ERROR [localhost/127.0.0.1:11235:QuorumCnxManager$Listener@715] - Exception while listening java.net.SocketException: Socket closed at java.net.PlainSocketImpl.socketAccept(Native Method) at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398) at java.net.ServerSocket.implAccept(ServerSocket.java:530) at java.net.ServerSocket.accept(ServerSocket.java:498) at org.apache.zookeeper.server.quorum.QuorumCnxManager$Listener.run(QuorumCnxManager.java:696) 2017-01-24 22:12:28,788 [myid:] - INFO [Thread-12:QuorumBase@318] - Shutting down leader election QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233 2017-01-24 22:12:28,788 [myid:1] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@980] - Connection broken for id 2, my id = 1, error = java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:965) 2017-01-24 22:12:28,788 [myid:1] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@983] - Interrupting SendWorker 2017-01-24 22:12:28,788 [myid:2] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@980] - Connection broken for id 1, my id = 2, error = java.net.SocketException: Socket closed at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.read(SocketInputStream.java:152) at java.net.SocketInputStream.read(SocketInputStream.java:122) at java.io.BufferedInputStream.fill(BufferedInputStream.java:235) at java.io.BufferedInputStream.read(BufferedInputStream.java:254) at 
java.io.DataInputStream.readInt(DataInputStream.java:387) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:965) 2017-01-24 22:12:28,789 [myid:2] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@983] - Interrupting SendWorker 2017-01-24 22:12:28,787 [myid:2] - WARN [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:QuorumPeer@879] - QuorumPeer main thread exited 2017-01-24 22:12:28,789 [myid:1] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@896] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1049) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:73) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:884) 2017-01-24 22:12:28,787 [myid:2] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@980] - Connection broken for id 0, my id = 2, error = java.net.SocketException: Socket closed at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.read(SocketInputStream.java:152) at java.net.SocketInputStream.read(SocketInputStream.java:122) at java.io.BufferedInputStream.fill(BufferedInputStream.java:235) at java.io.BufferedInputStream.read(BufferedInputStream.java:254) at java.io.DataInputStream.readInt(DataInputStream.java:387) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:965) 2017-01-24 22:12:28,790 [myid:2] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@983] - Interrupting SendWorker 2017-01-24 22:12:28,790 [myid:1] - WARN 
[SendWorker:2:QuorumCnxManager$SendWorker@906] - Send worker leaving thread 2017-01-24 22:12:28,787 [myid:1] - INFO [CommitProcessor:1:CommitProcessor@150] - CommitProcessor exited loop! 2017-01-24 22:12:28,787 [myid:2] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@896] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1049) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:73) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:884) 2017-01-24 22:12:28,791 [myid:2] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@906] - Send worker leaving thread 2017-01-24 22:12:28,787 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:FinalRequestProcessor@415] - shutdown of request processor complete 2017-01-24 22:12:28,787 [myid:0] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@980] - Connection broken for id 2, my id = 0, error = java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:965) 2017-01-24 22:12:28,791 [myid:0] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@983] - Interrupting SendWorker 2017-01-24 22:12:28,791 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:SyncRequestProcessor@175] - Shutting down 2017-01-24 22:12:28,788 [myid:] - INFO [Thread-12:QuorumBase@323] - Waiting for QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233 to exit thread 2017-01-24 22:12:28,792 [myid:1] - INFO 
[SyncThread:1:SyncRequestProcessor@155] - SyncRequestProcessor exited! 2017-01-24 22:12:28,792 [myid:0] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@896] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1049) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:73) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:884) 2017-01-24 22:12:28,792 [myid:0] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@906] - Send worker leaving thread 2017-01-24 22:12:28,793 [myid:] - INFO [Thread-12:QuorumPeerTestBase$MainThread@81] - id = 2 tmpDir = /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test6343054086158567637.junit.dir clientPort = 11233 2017-01-24 22:12:28,794 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:QuorumPeer@781] - LOOKING 2017-01-24 22:12:28,794 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:FileSnap@83] - Reading snapshot /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test4772652757083079773.junit.dir/data/version-2/snapshot.200000002 2017-01-24 22:12:28,795 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:28,795 [myid:] - INFO [Thread-12:ClientBase@246] - server 127.0.0.1:11233 not up java.net.ConnectException: Connection refused 2017-01-24 22:12:28,795 [myid:] - INFO [Thread-47:QuorumPeerConfig@111] - Reading configuration from: 
/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test6343054086158567637.junit.dir/zoo.cfg 2017-01-24 22:12:28,796 [myid:] - INFO [Thread-47:QuorumPeerConfig@374] - Defaulting to majority quorums 2017-01-24 22:12:28,796 [myid:2] - INFO [Thread-47:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3 2017-01-24 22:12:28,796 [myid:2] - INFO [Thread-47:DatadirCleanupManager@79] - autopurge.purgeInterval set to 0 2017-01-24 22:12:28,796 [myid:2] - INFO [Thread-47:DatadirCleanupManager@101] - Purge task is not scheduled. 2017-01-24 22:12:28,797 [myid:2] - WARN [Thread-47:QuorumPeerMain@129] - Unable to register log4j JMX control javax.management.InstanceAlreadyExistsException: log4j:hiearchy=default at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324) at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) at org.apache.zookeeper.jmx.ManagedUtil.registerLog4jMBeans(ManagedUtil.java:53) at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:127) at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:116) at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.run(QuorumPeerTestBase.java:140) at java.lang.Thread.run(Thread.java:745) 2017-01-24 22:12:28,797 [myid:2] - INFO [Thread-47:QuorumPeerMain@132] - Starting quorum peer 2017-01-24 22:12:28,797 [myid:2] - INFO [Thread-47:NIOServerCnxnFactory@94] - binding to port 0.0.0.0/0.0.0.0:11233 2017-01-24 22:12:28,798 [myid:2] - INFO 
[Thread-47:QuorumPeer@1048] - minSessionTimeout set to -1 2017-01-24 22:12:28,798 [myid:2] - INFO [Thread-47:QuorumPeer@1059] - maxSessionTimeout set to -1 2017-01-24 22:12:28,798 [myid:2] - INFO [Thread-47:QuorumPeer@1279] - quorum.auth.enableSasl set to true 2017-01-24 22:12:28,798 [myid:2] - INFO [Thread-47:QuorumPeer@1264] - quorum.auth.serverRequireSasl set to false 2017-01-24 22:12:28,799 [myid:2] - INFO [Thread-47:QuorumPeer@1270] - quorum.auth.learnerRequireSasl set to true 2017-01-24 22:12:28,799 [myid:2] - INFO [Thread-47:QuorumPeer@1286] - quorum.auth.kerberos.servicePrincipal set to zkquorum/localhost 2017-01-24 22:12:28,799 [myid:2] - INFO [Thread-47:QuorumPeer@1298] - quorum.auth.server.saslLoginContext set to QuorumServer 2017-01-24 22:12:28,799 [myid:2] - INFO [Thread-47:QuorumPeer@1292] - quorum.auth.learner.saslLoginContext set to QuorumLearner 2017-01-24 22:12:28,799 [myid:2] - INFO [Thread-47:QuorumPeer@1306] - quorum.cnxn.threads.size set to 20 2017-01-24 22:12:28,799 [myid:2] - INFO [Thread-47:Login@294] - QuorumServer successfully logged in. 2017-01-24 22:12:28,800 [myid:2] - INFO [Thread-47:Login@294] - QuorumLearner successfully logged in. 2017-01-24 22:12:28,800 [myid:2] - INFO [Thread-47:QuorumPeer@540] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2017-01-24 22:12:28,801 [myid:2] - INFO [Thread-47:QuorumPeer@555] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2017-01-24 22:12:28,802 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:FastLeaderElection@744] - New election. 
My id = 1, proposed zxid=0x300000001
2017-01-24 22:12:28,802 [myid:1] - WARN [WorkerSender[myid=1]:QuorumCnxManager@559] - Cannot open channel to 2 at election address localhost/127.0.0.1:11235
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:538)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:514)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:393)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:365)
    at java.lang.Thread.run(Thread.java:745)
2017-01-24 22:12:28,803 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x300000001 (n.zxid), 0x4 (n.round), LOOKING (n.state), 1 (n.sid), 0x3 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:28,803 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x300000001 (n.zxid), 0x4 (n.round), LOOKING (n.state), 1 (n.sid), 0x3 (n.peerEPoch), FOLLOWING (my state)
2017-01-24 22:12:28,803 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x200000002 (n.zxid), 0x3 (n.round), FOLLOWING (n.state), 0 (n.sid), 0x2 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:28,804 [myid:2] - INFO [Thread-48:QuorumCnxManager$Listener@691] - My election bind port: 0.0.0.0/0.0.0.0:11235
2017-01-24 22:12:28,812 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:QuorumPeer@781] - LOOKING
2017-01-24 22:12:28,812 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:FastLeaderElection@744] - New election. My id = 2, proposed zxid=0x0
2017-01-24 22:12:28,813 [myid:0] - INFO [localhost/127.0.0.1:11229:QuorumCnxManager$Listener@698] - Received connection request /127.0.0.1:51872
2017-01-24 22:12:28,821 [myid:1] - INFO [localhost/127.0.0.1:11232:QuorumCnxManager$Listener@698] - Received connection request /127.0.0.1:45220
2017-01-24 22:12:28,821 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:28,826 [myid:2] - INFO [QuorumConnectionThread-[myid=2]-1:SecurityUtils@68] - QuorumLearner will use DIGEST-MD5 as SASL mechanism.
2017-01-24 22:12:28,828 [myid:2] - INFO [QuorumConnectionThread-[myid=2]-2:SecurityUtils@68] - QuorumLearner will use DIGEST-MD5 as SASL mechanism.
2017-01-24 22:12:28,835 [myid:0] - INFO [QuorumConnectionThread-[myid=0]-1:SaslQuorumServerCallbackHandler@143] - Successfully authenticated learner: authenticationID=test; authorizationID=test.
2017-01-24 22:12:28,836 [myid:0] - INFO [QuorumConnectionThread-[myid=0]-1:SaslQuorumAuthServer@114] - Successfully completed the authentication using SASL. learner addr: /127.0.0.1:51872
2017-01-24 22:12:28,837 [myid:2] - INFO [QuorumConnectionThread-[myid=2]-1:SaslQuorumAuthLearner@151] - Successfully completed the authentication using SASL. server addr: localhost/127.0.0.1:11229, status: SUCCESS
2017-01-24 22:12:28,838 [myid:1] - INFO [QuorumConnectionThread-[myid=1]-1:SaslQuorumServerCallbackHandler@143] - Successfully authenticated learner: authenticationID=test; authorizationID=test.
2017-01-24 22:12:28,838 [myid:1] - INFO [QuorumConnectionThread-[myid=1]-1:SaslQuorumAuthServer@114] - Successfully completed the authentication using SASL. learner addr: /127.0.0.1:45220
2017-01-24 22:12:28,839 [myid:2] - INFO [QuorumConnectionThread-[myid=2]-2:SaslQuorumAuthLearner@151] - Successfully completed the authentication using SASL. server addr: localhost/127.0.0.1:11232, status: SUCCESS
2017-01-24 22:12:28,844 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEPoch), FOLLOWING (my state)
2017-01-24 22:12:28,853 [myid:1] - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11224:QuorumCnxManager@559] - Cannot open channel to 0 at election address localhost/127.0.0.1:11223
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:538)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:579)
    at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:769)
    at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:827)
2017-01-24 22:12:28,853 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11224:FastLeaderElection@778] - Notification time out: 400
2017-01-24 22:12:28,854 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x200000002 (n.zxid), 0x3 (n.round), LOOKING (n.state), 0 (n.sid), 0x2 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:28,854 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x200000002 (n.zxid), 0x3 (n.round), FOLLOWING (n.state), 0 (n.sid), 0x2 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:28,854 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x200000002 (n.zxid), 0x3 (n.round), LOOKING (n.state), 2 (n.sid), 0x2 (n.peerEPoch), FOLLOWING (my state)
2017-01-24 22:12:28,854 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x200000002 (n.zxid), 0x3 (n.round), LOOKING (n.state), 2 (n.sid), 0x2 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:28,855 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x300000001 (n.zxid), 0x4 (n.round), LOOKING (n.state), 1 (n.sid), 0x3 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:28,855 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x200000002 (n.zxid), 0x3 (n.round), FOLLOWING (n.state), 0 (n.sid), 0x2 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:28,855 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:QuorumPeer@861] - LEADING
2017-01-24 22:12:28,855 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:ZooKeeperServer@162] - Created server with tickTime 4000 minSessionTimeout 8000 maxSessionTimeout 80000 datadir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test6343054086158567637.junit.dir/data/version-2 snapdir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test6343054086158567637.junit.dir/data/version-2
2017-01-24 22:12:28,855 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:Leader@356] - LEADING - LEADER ELECTION TOOK - 43
2017-01-24 22:12:28,856 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:FileTxnSnapLog@281] - Snapshotting: 0x0 to /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test6343054086158567637.junit.dir/data/version-2/snapshot.0
2017-01-24 22:12:28,856 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:28,858 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x200000002 (n.zxid), 0x3 (n.round), LOOKING (n.state), 2 (n.sid), 0x2 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:28,859 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x300000001 (n.zxid), 0x4 (n.round), LOOKING (n.state), 1 (n.sid), 0x3 (n.peerEPoch), LEADING (my state)
2017-01-24 22:12:28,859 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x300000001 (n.zxid), 0x4 (n.round), LOOKING (n.state), 1 (n.sid), 0x3 (n.peerEPoch), LEADING (my state)
2017-01-24 22:12:28,859 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x200000002 (n.zxid), 0x3 (n.round), LEADING (n.state), 2 (n.sid), 0x2 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:28,859 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x200000002 (n.zxid), 0x3 (n.round), LEADING (n.state), 2 (n.sid), 0x2 (n.peerEPoch), FOLLOWING (my state)
2017-01-24 22:12:28,859 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:QuorumPeer@849] - FOLLOWING
2017-01-24 22:12:28,860 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:ZooKeeperServer@162] - Created server with tickTime 4000 minSessionTimeout 8000 maxSessionTimeout 80000 datadir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test4772652757083079773.junit.dir/data/version-2 snapdir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test4772652757083079773.junit.dir/data/version-2
2017-01-24 22:12:28,860 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:Follower@64] - FOLLOWING - LEADER ELECTION TOOK - 66
2017-01-24 22:12:28,860 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:SecurityUtils@68] - QuorumLearner will use DIGEST-MD5 as SASL mechanism.
2017-01-24 22:12:28,862 [myid:2] - INFO [Thread-49:SaslQuorumServerCallbackHandler@143] - Successfully authenticated learner: authenticationID=test; authorizationID=test.
2017-01-24 22:12:28,863 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:SaslQuorumAuthLearner@151] - Successfully completed the authentication using SASL. server addr: localhost/127.0.0.1:11234, status: SUCCESS
2017-01-24 22:12:28,864 [myid:2] - INFO [Thread-49:SaslQuorumAuthServer@114] - Successfully completed the authentication using SASL. learner addr: /127.0.0.1:42601
2017-01-24 22:12:28,864 [myid:2] - INFO [LearnerHandler-/127.0.0.1:42601:LearnerHandler@287] - Follower sid: 1 : info : org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@2cdbf42e
2017-01-24 22:12:28,867 [myid:2] - ERROR [LearnerHandler-/127.0.0.1:42601:LearnerHandler@585] - Unexpected exception causing shutdown while sock still open
java.io.IOException: Follower is ahead of the leader
    at org.apache.zookeeper.server.quorum.Leader.waitForEpochAck(Leader.java:889)
    at org.apache.zookeeper.server.quorum.LearnerHandler.run(LearnerHandler.java:322)
2017-01-24 22:12:28,867 [myid:1] - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:Follower@90] - Exception when following the leader
java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:392)
    at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63)
    at org.apache.zookeeper.server.quorum.QuorumPacket.deserialize(QuorumPacket.java:83)
    at org.apache.jute.BinaryInputArchive.readRecord(BinaryInputArchive.java:99)
    at org.apache.zookeeper.server.quorum.Learner.readPacket(Learner.java:152)
    at org.apache.zookeeper.server.quorum.Learner.syncWithLeader(Learner.java:321)
    at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:83)
    at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:851)
2017-01-24 22:12:28,868 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:Follower@167] - shutdown called
java.lang.Exception: shutdown Follower
    at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:167)
    at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:855)
2017-01-24 22:12:28,868 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:FollowerZooKeeperServer@139] - Shutting down
2017-01-24 22:12:28,868 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:ZooKeeperServer@419] - shutting down
2017-01-24 22:12:28,868 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:QuorumPeer@781] - LOOKING
2017-01-24 22:12:28,869 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:FileSnap@83] - Reading snapshot /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test4772652757083079773.junit.dir/data/version-2/snapshot.200000002
2017-01-24 22:12:28,870 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:FastLeaderElection@744] - New election. My id = 1, proposed zxid=0x300000001
2017-01-24 22:12:28,871 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x300000001 (n.zxid), 0x4 (n.round), LOOKING (n.state), 1 (n.sid), 0x3 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:28,871 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x300000001 (n.zxid), 0x4 (n.round), LOOKING (n.state), 1 (n.sid), 0x3 (n.peerEPoch), FOLLOWING (my state)
2017-01-24 22:12:28,871 [myid:2] - WARN [LearnerHandler-/127.0.0.1:42601:LearnerHandler@598] - ******* GOODBYE /127.0.0.1:42601 ********
2017-01-24 22:12:28,872 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x200000002 (n.zxid), 0x3 (n.round), FOLLOWING (n.state), 0 (n.sid), 0x2 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:28,871 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x300000001 (n.zxid), 0x4 (n.round), LOOKING (n.state), 1 (n.sid), 0x3 (n.peerEPoch), LEADING (my state)
2017-01-24 22:12:28,872 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x200000002 (n.zxid), 0x3 (n.round), LEADING (n.state), 2 (n.sid), 0x2 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:28,873 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:QuorumPeer@849] - FOLLOWING
2017-01-24 22:12:28,873 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:ZooKeeperServer@162] - Created server with tickTime 4000 minSessionTimeout 8000 maxSessionTimeout 80000 datadir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test4772652757083079773.junit.dir/data/version-2 snapdir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test4772652757083079773.junit.dir/data/version-2
2017-01-24 22:12:28,873 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:Follower@64] - FOLLOWING - LEADER ELECTION TOOK - 5
2017-01-24 22:12:28,874 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:SecurityUtils@68] - QuorumLearner will use DIGEST-MD5 as SASL mechanism.
2017-01-24 22:12:28,875 [myid:2] - INFO [Thread-49:SaslQuorumServerCallbackHandler@143] - Successfully authenticated learner: authenticationID=test; authorizationID=test.
2017-01-24 22:12:28,876 [myid:2] - INFO [Thread-49:SaslQuorumAuthServer@114] - Successfully completed the authentication using SASL. learner addr: /127.0.0.1:42602
2017-01-24 22:12:28,876 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:SaslQuorumAuthLearner@151] - Successfully completed the authentication using SASL. server addr: localhost/127.0.0.1:11234, status: SUCCESS
2017-01-24 22:12:28,877 [myid:2] - INFO [LearnerHandler-/127.0.0.1:42602:LearnerHandler@287] - Follower sid: 1 : info : org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@2cdbf42e
2017-01-24 22:12:28,976 [myid:1] - INFO [localhost/127.0.0.1:11232:QuorumCnxManager$Listener@728] - Leaving listener
2017-01-24 22:12:29,045 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:29,046 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45833
2017-01-24 22:12:29,046 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45833
2017-01-24 22:12:29,047 [myid:2] - INFO [Thread-50:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45833 (no session established for client)
2017-01-24 22:12:29,184 [myid:] - WARN [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2017-01-24 22:12:29,184 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11233
2017-01-24 22:12:29,185 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45834
2017-01-24 22:12:29,185 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:45834, server: localhost/127.0.0.1:11233
2017-01-24 22:12:29,185 [myid:2] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@354] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2017-01-24 22:12:29,185 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45834 (no session established for client)
2017-01-24 22:12:29,185 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect
2017-01-24 22:12:29,254 [myid:1] - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11224:QuorumCnxManager@559] - Cannot open channel to 0 at election address localhost/127.0.0.1:11223
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:538)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:579)
    at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:769)
    at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:827)
2017-01-24 22:12:29,254 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11224:FastLeaderElection@778] - Notification time out: 800
2017-01-24 22:12:29,297 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:29,298 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45836
2017-01-24 22:12:29,298 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45836
2017-01-24 22:12:29,299 [myid:2] - INFO [Thread-51:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45836 (no session established for client)
2017-01-24 22:12:29,363 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:SaslQuorumAuthLearner@79] - Skipping SASL authentication as quorum.auth.learnerRequireSasl=false
2017-01-24 22:12:29,365 [myid:2] - INFO [LearnerHandler-/127.0.0.1:42607:LearnerHandler@287] - Follower sid: 0 : info : org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@76415e70
2017-01-24 22:12:29,366 [myid:2] - ERROR [LearnerHandler-/127.0.0.1:42607:LearnerHandler@585] - Unexpected exception causing shutdown while sock still open
java.io.IOException: Follower is ahead of the leader
    at org.apache.zookeeper.server.quorum.Leader.waitForEpochAck(Leader.java:889)
    at org.apache.zookeeper.server.quorum.LearnerHandler.run(LearnerHandler.java:322)
2017-01-24 22:12:29,366 [myid:2] - WARN [LearnerHandler-/127.0.0.1:42607:LearnerHandler@598] - ******* GOODBYE /127.0.0.1:42607 ********
2017-01-24 22:12:29,366 [myid:0] - WARN [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:Follower@90] - Exception when following the leader
java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:392)
    at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63)
    at org.apache.zookeeper.server.quorum.QuorumPacket.deserialize(QuorumPacket.java:83)
    at org.apache.jute.BinaryInputArchive.readRecord(BinaryInputArchive.java:99)
    at org.apache.zookeeper.server.quorum.Learner.readPacket(Learner.java:152)
    at org.apache.zookeeper.server.quorum.Learner.syncWithLeader(Learner.java:321)
    at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:83)
    at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:851)
2017-01-24 22:12:29,367 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:Follower@167] - shutdown called
java.lang.Exception: shutdown Follower
    at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:167)
    at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:855)
2017-01-24 22:12:29,367 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:FollowerZooKeeperServer@139] - Shutting down
2017-01-24 22:12:29,367 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:ZooKeeperServer@419] - shutting down
2017-01-24 22:12:29,367 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:QuorumPeer@781] - LOOKING
2017-01-24 22:12:29,368 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:FileSnap@83] - Reading snapshot /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test6261823493417515862.junit.dir/data/version-2/snapshot.100000003
2017-01-24 22:12:29,369 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:FastLeaderElection@744] - New election. My id = 0, proposed zxid=0x200000002
2017-01-24 22:12:29,370 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 0 (n.leader), 0x200000002 (n.zxid), 0x4 (n.round), LOOKING (n.state), 0 (n.sid), 0x2 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:29,370 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@542] - Notification: 0 (n.leader), 0x200000002 (n.zxid), 0x4 (n.round), LOOKING (n.state), 0 (n.sid), 0x2 (n.peerEPoch), LEADING (my state)
2017-01-24 22:12:29,370 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 0 (n.leader), 0x200000002 (n.zxid), 0x4 (n.round), LOOKING (n.state), 0 (n.sid), 0x2 (n.peerEPoch), FOLLOWING (my state)
2017-01-24 22:12:29,371 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x200000002 (n.zxid), 0x3 (n.round), LEADING (n.state), 2 (n.sid), 0x2 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:29,371 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 2 (n.leader), 0x200000002 (n.zxid), 0x3 (n.round), FOLLOWING (n.state), 1 (n.sid), 0x2 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:12:29,371 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:QuorumPeer@849] - FOLLOWING
2017-01-24 22:12:29,371 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:ZooKeeperServer@162] - Created server with tickTime 4000 minSessionTimeout 8000 maxSessionTimeout 80000 datadir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test6261823493417515862.junit.dir/data/version-2 snapdir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test6261823493417515862.junit.dir/data/version-2
2017-01-24 22:12:29,371 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:Follower@64] - FOLLOWING - LEADER ELECTION TOOK - 3
2017-01-24 22:12:29,372 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:SaslQuorumAuthLearner@79] - Skipping SASL authentication as quorum.auth.learnerRequireSasl=false
2017-01-24 22:12:29,372 [myid:2] - INFO [LearnerHandler-/127.0.0.1:42608:LearnerHandler@287] - Follower sid: 0 : info : org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@76415e70
2017-01-24 22:12:29,386 [myid:] - WARN [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2017-01-24 22:12:29,386 [myid:] - INFO [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11227
2017-01-24 22:12:29,386 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11227:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:42869
2017-01-24 22:12:29,386 [myid:] - INFO [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:42869, server: localhost/127.0.0.1:11227
2017-01-24 22:12:29,387 [myid:0] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11227:NIOServerCnxn@354] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2017-01-24 22:12:29,387 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11227:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:42869 (no session established for client)
2017-01-24 22:12:29,387 [myid:] - INFO [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect
2017-01-24 22:12:29,549 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:29,550 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45840
2017-01-24 22:12:29,550 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45840
2017-01-24 22:12:29,551 [myid:2] - INFO [Thread-52:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45840 (no session established for client)
2017-01-24 22:12:29,788 [myid:2] - INFO [localhost/127.0.0.1:11235:QuorumCnxManager$Listener@728] - Leaving listener
2017-01-24 22:12:29,801 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:29,801 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45841
2017-01-24 22:12:29,802 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45841
2017-01-24 22:12:29,802 [myid:2] - INFO [Thread-53:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45841 (no session established for client)
2017-01-24 22:12:30,052 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:30,053 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45842
2017-01-24 22:12:30,053 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45842
2017-01-24 22:12:30,054 [myid:2] - INFO [Thread-54:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45842 (no session established for client)
2017-01-24 22:12:30,055 [myid:1] - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11224:QuorumCnxManager@559] - Cannot open channel to 0 at election address localhost/127.0.0.1:11223
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:538)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:579)
    at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:769)
    at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:827)
2017-01-24 22:12:30,055 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11224:FastLeaderElection@778] - Notification time out: 1600
2017-01-24 22:12:30,304 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:30,305 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45844
2017-01-24 22:12:30,305 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45844
2017-01-24 22:12:30,306 [myid:2] - INFO [Thread-55:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45844 (no session established for client)
2017-01-24 22:12:30,502 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection$Messenger$WorkerReceiver@340] - WorkerReceiver is down
2017-01-24 22:12:30,502 [myid:1] - INFO [WorkerSender[myid=1]:FastLeaderElection$Messenger$WorkerSender@370] - WorkerSender is down
2017-01-24 22:12:30,556 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:30,557 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45845
2017-01-24 22:12:30,557 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45845
2017-01-24 22:12:30,557 [myid:2] - INFO [Thread-56:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45845 (no session established for client)
2017-01-24 22:12:30,808 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:30,808 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45846
2017-01-24 22:12:30,809 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45846
2017-01-24 22:12:30,809 [myid:2] - INFO [Thread-57:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45846 (no session established for client)
2017-01-24 22:12:30,993 [myid:2] - INFO [WorkerSender[myid=2]:FastLeaderElection$Messenger$WorkerSender@370] - WorkerSender is down
2017-01-24 22:12:31,059 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:31,060 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45847
2017-01-24 22:12:31,060 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45847
2017-01-24 22:12:31,061 [myid:2] - INFO [Thread-58:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45847 (no session established for client)
2017-01-24 22:12:31,189 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection$Messenger$WorkerReceiver@340] - WorkerReceiver is down
2017-01-24 22:12:31,311 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:31,312 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45848
2017-01-24 22:12:31,312 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45848
2017-01-24 22:12:31,313 [myid:2] - INFO [Thread-59:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45848 (no session established for client)
2017-01-24 22:12:31,365 [myid:] - WARN [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2017-01-24 22:12:31,365 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11230
2017-01-24 22:12:31,365 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:52380
2017-01-24 22:12:31,365 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:52380, server: localhost/127.0.0.1:11230
2017-01-24 22:12:31,366 [myid:1] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxn@354] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2017-01-24 22:12:31,366 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:52380 (no session established for client)
2017-01-24 22:12:31,366 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect
2017-01-24 22:12:31,563 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:31,563 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45850
2017-01-24 22:12:31,564 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45850
2017-01-24 22:12:31,564 [myid:2] - INFO [Thread-60:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45850 (no session established for client)
2017-01-24 22:12:31,656 [myid:1] - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11224:QuorumCnxManager@559] - Cannot open channel to 0 at election address localhost/127.0.0.1:11223
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:538)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:579)
    at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:769)
    at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:827)
2017-01-24 22:12:31,656 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11224:FastLeaderElection@778] - Notification time out: 3200
2017-01-24 22:12:31,815 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:31,815 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45852
2017-01-24 22:12:31,815 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45852
2017-01-24 22:12:31,816 [myid:2] - INFO [Thread-61:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45852 (no session established for client)
2017-01-24 22:12:32,000 [myid:2] - INFO [SessionTracker:SessionTrackerImpl@162] - SessionTrackerImpl exited loop!
2017-01-24 22:12:32,000 [myid:1] - INFO [SessionTracker:SessionTrackerImpl@162] - SessionTrackerImpl exited loop!
2017-01-24 22:12:32,046 [myid:] - WARN [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2017-01-24 22:12:32,046 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11233
2017-01-24 22:12:32,046 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45853
2017-01-24 22:12:32,046 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:45853, server: localhost/127.0.0.1:11233
2017-01-24 22:12:32,047 [myid:2] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@354] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2017-01-24 22:12:32,047 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45853 (no session established for client)
2017-01-24 22:12:32,047 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect
2017-01-24 22:12:32,066 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:32,067 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45854
2017-01-24 22:12:32,067 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45854
2017-01-24 22:12:32,068 [myid:2] - INFO [Thread-62:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45854 (no session established for client)
2017-01-24 22:12:32,318 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:32,319 [myid:2] - INFO
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45855 2017-01-24 22:12:32,319 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45855 2017-01-24 22:12:32,320 [myid:2] - INFO [Thread-63:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45855 (no session established for client) 2017-01-24 22:12:32,570 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:32,570 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45856 2017-01-24 22:12:32,571 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45856 2017-01-24 22:12:32,571 [myid:2] - INFO [Thread-64:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45856 (no session established for client) 2017-01-24 22:12:32,710 [myid:] - WARN [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. 
2017-01-24 22:12:32,711 [myid:] - INFO [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11227 2017-01-24 22:12:32,711 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11227:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:42887 2017-01-24 22:12:32,711 [myid:] - INFO [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:42887, server: localhost/127.0.0.1:11227 2017-01-24 22:12:32,711 [myid:0] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11227:NIOServerCnxn@354] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running 2017-01-24 22:12:32,712 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11227:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:42887 (no session established for client) 2017-01-24 22:12:32,712 [myid:] - INFO [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect 2017-01-24 22:12:32,822 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:32,822 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45858 2017-01-24 22:12:32,822 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45858 2017-01-24 22:12:32,823 [myid:2] - INFO [Thread-65:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45858 (no session established for client) 2017-01-24 22:12:33,073 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:33,074 [myid:2] - INFO 
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45859 2017-01-24 22:12:33,074 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45859 2017-01-24 22:12:33,075 [myid:2] - INFO [Thread-66:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45859 (no session established for client) 2017-01-24 22:12:33,325 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:33,326 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45860 2017-01-24 22:12:33,326 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45860 2017-01-24 22:12:33,326 [myid:2] - INFO [Thread-67:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45860 (no session established for client) 2017-01-24 22:12:33,577 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:33,577 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45861 2017-01-24 22:12:33,578 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45861 2017-01-24 22:12:33,578 [myid:2] - INFO [Thread-68:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45861 (no session established for client) 2017-01-24 22:12:33,828 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:33,829 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45862 2017-01-24 22:12:33,829 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from 
/127.0.0.1:45862 2017-01-24 22:12:33,830 [myid:2] - INFO [Thread-69:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45862 (no session established for client) 2017-01-24 22:12:34,080 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:34,081 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45863 2017-01-24 22:12:34,081 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45863 2017-01-24 22:12:34,082 [myid:2] - INFO [Thread-70:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45863 (no session established for client) 2017-01-24 22:12:34,332 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:34,332 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45864 2017-01-24 22:12:34,333 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45864 2017-01-24 22:12:34,333 [myid:2] - INFO [Thread-71:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45864 (no session established for client) 2017-01-24 22:12:34,421 [myid:] - WARN [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. 
2017-01-24 22:12:34,421 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11230 2017-01-24 22:12:34,421 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:52396 2017-01-24 22:12:34,421 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:52396, server: localhost/127.0.0.1:11230 2017-01-24 22:12:34,422 [myid:1] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxn@354] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running 2017-01-24 22:12:34,422 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:52396 (no session established for client) 2017-01-24 22:12:34,422 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect 2017-01-24 22:12:34,584 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:34,584 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45866 2017-01-24 22:12:34,584 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45866 2017-01-24 22:12:34,585 [myid:2] - INFO [Thread-72:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45866 (no session established for client) 2017-01-24 22:12:34,835 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:34,836 [myid:2] - INFO 
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45867 2017-01-24 22:12:34,836 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45867 2017-01-24 22:12:34,837 [myid:2] - INFO [Thread-73:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45867 (no session established for client) 2017-01-24 22:12:34,857 [myid:1] - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11224:QuorumCnxManager@559] - Cannot open channel to 0 at election address localhost/127.0.0.1:11223 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:538) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:579) at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:769) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:827) 2017-01-24 22:12:34,857 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11224:FastLeaderElection@778] - Notification time out: 6400 2017-01-24 22:12:35,087 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:35,088 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45869 2017-01-24 22:12:35,088 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45869 2017-01-24 
22:12:35,088 [myid:2] - INFO [Thread-74:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45869 (no session established for client) 2017-01-24 22:12:35,339 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:35,339 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45870 2017-01-24 22:12:35,340 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45870 2017-01-24 22:12:35,340 [myid:2] - INFO [Thread-75:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45870 (no session established for client) 2017-01-24 22:12:35,424 [myid:] - WARN [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. 
2017-01-24 22:12:35,425 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11233 2017-01-24 22:12:35,425 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45871 2017-01-24 22:12:35,425 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:45871, server: localhost/127.0.0.1:11233 2017-01-24 22:12:35,425 [myid:2] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@354] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running 2017-01-24 22:12:35,426 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45871 (no session established for client) 2017-01-24 22:12:35,426 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect 2017-01-24 22:12:35,590 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:35,591 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45872 2017-01-24 22:12:35,591 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45872 2017-01-24 22:12:35,592 [myid:2] - INFO [Thread-76:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45872 (no session established for client) 2017-01-24 22:12:35,842 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:35,843 [myid:2] - INFO 
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45873 2017-01-24 22:12:35,843 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45873 2017-01-24 22:12:35,844 [myid:2] - INFO [Thread-77:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45873 (no session established for client) 2017-01-24 22:12:36,094 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:36,095 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45874 2017-01-24 22:12:36,095 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45874 2017-01-24 22:12:36,095 [myid:2] - INFO [Thread-78:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45874 (no session established for client) 2017-01-24 22:12:36,205 [myid:] - WARN [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. 
2017-01-24 22:12:36,206 [myid:] - INFO [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11227 2017-01-24 22:12:36,206 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11227:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:42905 2017-01-24 22:12:36,206 [myid:] - INFO [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:42905, server: localhost/127.0.0.1:11227 2017-01-24 22:12:36,207 [myid:0] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11227:NIOServerCnxn@354] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running 2017-01-24 22:12:36,207 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11227:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:42905 (no session established for client) 2017-01-24 22:12:36,207 [myid:] - INFO [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect 2017-01-24 22:12:36,346 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:36,346 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45876 2017-01-24 22:12:36,347 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45876 2017-01-24 22:12:36,347 [myid:2] - INFO [Thread-79:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45876 (no session established for client) 2017-01-24 22:12:36,597 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:36,598 [myid:2] - INFO 
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45877 2017-01-24 22:12:36,598 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45877 2017-01-24 22:12:36,599 [myid:2] - INFO [Thread-80:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45877 (no session established for client) 2017-01-24 22:12:36,849 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:36,850 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45878 2017-01-24 22:12:36,850 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45878 2017-01-24 22:12:36,850 [myid:2] - INFO [Thread-81:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45878 (no session established for client) 2017-01-24 22:12:37,101 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:37,101 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45879 2017-01-24 22:12:37,102 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45879 2017-01-24 22:12:37,102 [myid:2] - INFO [Thread-82:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45879 (no session established for client) 2017-01-24 22:12:37,352 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:37,353 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45880 2017-01-24 22:12:37,353 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from 
/127.0.0.1:45880 2017-01-24 22:12:37,354 [myid:2] - INFO [Thread-83:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45880 (no session established for client) 2017-01-24 22:12:37,604 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:37,605 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45881 2017-01-24 22:12:37,605 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45881 2017-01-24 22:12:37,606 [myid:2] - INFO [Thread-84:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45881 (no session established for client) 2017-01-24 22:12:37,856 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:37,857 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45882 2017-01-24 22:12:37,857 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45882 2017-01-24 22:12:37,857 [myid:2] - INFO [Thread-85:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45882 (no session established for client) 2017-01-24 22:12:38,108 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:38,108 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45883 2017-01-24 22:12:38,109 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45883 2017-01-24 22:12:38,109 [myid:2] - INFO [Thread-86:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45883 (no session established for client) 2017-01-24 22:12:38,282 [myid:] - WARN 
[Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. 2017-01-24 22:12:38,282 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11230 2017-01-24 22:12:38,282 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:52415 2017-01-24 22:12:38,282 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:52415, server: localhost/127.0.0.1:11230 2017-01-24 22:12:38,283 [myid:1] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxn@354] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running 2017-01-24 22:12:38,283 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:52415 (no session established for client) 2017-01-24 22:12:38,283 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect 2017-01-24 22:12:38,360 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:38,360 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45885 2017-01-24 22:12:38,360 [myid:2] - INFO 
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45885 2017-01-24 22:12:38,361 [myid:2] - INFO [Thread-87:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45885 (no session established for client) 2017-01-24 22:12:38,611 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:38,612 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45886 2017-01-24 22:12:38,612 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45886 2017-01-24 22:12:38,613 [myid:2] - INFO [Thread-88:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45886 (no session established for client) 2017-01-24 22:12:38,863 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:38,864 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45887 2017-01-24 22:12:38,864 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45887 2017-01-24 22:12:38,864 [myid:2] - INFO [Thread-89:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45887 (no session established for client) 2017-01-24 22:12:39,115 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:39,115 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45888 2017-01-24 22:12:39,116 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45888 2017-01-24 22:12:39,116 [myid:2] - INFO [Thread-90:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45888 (no session established 
for client) 2017-01-24 22:12:39,296 [myid:] - WARN [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. 2017-01-24 22:12:39,297 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11233 2017-01-24 22:12:39,297 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45889 2017-01-24 22:12:39,297 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:45889, server: localhost/127.0.0.1:11233 2017-01-24 22:12:39,298 [myid:2] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@354] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running 2017-01-24 22:12:39,298 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45889 (no session established for client) 2017-01-24 22:12:39,298 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect 2017-01-24 22:12:39,367 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:39,367 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45890 2017-01-24 22:12:39,367 
[myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45890 2017-01-24 22:12:39,368 [myid:2] - INFO [Thread-91:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45890 (no session established for client) 2017-01-24 22:12:39,618 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:39,619 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45891 2017-01-24 22:12:39,619 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45891 2017-01-24 22:12:39,620 [myid:2] - INFO [Thread-92:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45891 (no session established for client) 2017-01-24 22:12:39,870 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:39,871 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45892 2017-01-24 22:12:39,871 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45892 2017-01-24 22:12:39,871 [myid:2] - INFO [Thread-93:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45892 (no session established for client) 2017-01-24 22:12:39,948 [myid:] - WARN [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. 
2017-01-24 22:12:39,949 [myid:] - INFO [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11227 2017-01-24 22:12:39,949 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11227:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:42923 2017-01-24 22:12:39,949 [myid:] - INFO [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:42923, server: localhost/127.0.0.1:11227 2017-01-24 22:12:39,949 [myid:0] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11227:NIOServerCnxn@354] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running 2017-01-24 22:12:39,950 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11227:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:42923 (no session established for client) 2017-01-24 22:12:39,950 [myid:] - INFO [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect 2017-01-24 22:12:40,122 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:40,122 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45894 2017-01-24 22:12:40,123 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45894 2017-01-24 22:12:40,123 [myid:2] - INFO [Thread-94:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45894 (no session established for client) 2017-01-24 22:12:40,374 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:40,375 [myid:2] - INFO 
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45895 2017-01-24 22:12:40,375 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45895 2017-01-24 22:12:40,375 [myid:2] - INFO [Thread-95:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45895 (no session established for client) 2017-01-24 22:12:40,626 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:40,626 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45896 2017-01-24 22:12:40,627 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45896 2017-01-24 22:12:40,627 [myid:2] - INFO [Thread-96:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45896 (no session established for client) 2017-01-24 22:12:40,877 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:40,878 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45897 2017-01-24 22:12:40,878 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45897 2017-01-24 22:12:40,879 [myid:2] - INFO [Thread-97:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45897 (no session established for client) 2017-01-24 22:12:41,129 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:41,130 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45898 2017-01-24 22:12:41,130 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from 
/127.0.0.1:45898 2017-01-24 22:12:41,131 [myid:2] - INFO [Thread-98:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45898 (no session established for client) 2017-01-24 22:12:41,205 [myid:] - WARN [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. 2017-01-24 22:12:41,206 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11230 2017-01-24 22:12:41,206 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:52430 2017-01-24 22:12:41,206 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:52430, server: localhost/127.0.0.1:11230 2017-01-24 22:12:41,207 [myid:1] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxn@354] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running 2017-01-24 22:12:41,207 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:52430 (no session established for client) 2017-01-24 22:12:41,207 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect 2017-01-24 22:12:41,257 [myid:1] - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11224:QuorumCnxManager@559] - Cannot open channel 
to 0 at election address localhost/127.0.0.1:11223 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:538) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:579) at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:769) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:827) 2017-01-24 22:12:41,258 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11224:FastLeaderElection@778] - Notification time out: 12800 2017-01-24 22:12:41,381 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:41,381 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45901 2017-01-24 22:12:41,382 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45901 2017-01-24 22:12:41,382 [myid:2] - INFO [Thread-99:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45901 (no session established for client) 2017-01-24 22:12:41,633 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:41,633 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45902 2017-01-24 22:12:41,633 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45902 
2017-01-24 22:12:41,634 [myid:2] - INFO [Thread-100:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45902 (no session established for client) 2017-01-24 22:12:41,741 [myid:] - WARN [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. 2017-01-24 22:12:41,742 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11233 2017-01-24 22:12:41,742 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45903 2017-01-24 22:12:41,742 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:45903, server: localhost/127.0.0.1:11233 2017-01-24 22:12:41,742 [myid:2] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@354] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running 2017-01-24 22:12:41,742 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45903 (no session established for client) 2017-01-24 22:12:41,743 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect 2017-01-24 22:12:41,884 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:41,885 [myid:2] - 
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45904 2017-01-24 22:12:41,885 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45904 2017-01-24 22:12:41,886 [myid:2] - INFO [Thread-101:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45904 (no session established for client) 2017-01-24 22:12:42,136 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:42,137 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45905 2017-01-24 22:12:42,137 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45905 2017-01-24 22:12:42,137 [myid:2] - INFO [Thread-102:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45905 (no session established for client) 2017-01-24 22:12:42,388 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:42,388 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45906 2017-01-24 22:12:42,389 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45906 2017-01-24 22:12:42,389 [myid:2] - INFO [Thread-103:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45906 (no session established for client) 2017-01-24 22:12:42,564 [myid:] - WARN [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. 
Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. 2017-01-24 22:12:42,565 [myid:] - INFO [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11227 2017-01-24 22:12:42,565 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11227:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:42937 2017-01-24 22:12:42,565 [myid:] - INFO [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:42937, server: localhost/127.0.0.1:11227 2017-01-24 22:12:42,565 [myid:0] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11227:NIOServerCnxn@354] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running 2017-01-24 22:12:42,566 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11227:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:42937 (no session established for client) 2017-01-24 22:12:42,566 [myid:] - INFO [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect 2017-01-24 22:12:42,639 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:42,640 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45908 2017-01-24 22:12:42,640 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45908 2017-01-24 22:12:42,641 [myid:2] - INFO [Thread-104:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45908 (no session established for client) 2017-01-24 22:12:42,891 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 
2017-01-24 22:12:42,892 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45909 2017-01-24 22:12:42,892 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45909 2017-01-24 22:12:42,893 [myid:2] - INFO [Thread-105:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45909 (no session established for client) 2017-01-24 22:12:43,143 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:43,143 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45910 2017-01-24 22:12:43,144 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45910 2017-01-24 22:12:43,145 [myid:2] - INFO [Thread-106:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45910 (no session established for client) 2017-01-24 22:12:43,395 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:43,396 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45911 2017-01-24 22:12:43,396 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45911 2017-01-24 22:12:43,396 [myid:2] - INFO [Thread-107:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45911 (no session established for client) 2017-01-24 22:12:43,647 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:43,647 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45912 2017-01-24 22:12:43,647 [myid:2] - INFO 
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45912 2017-01-24 22:12:43,648 [myid:2] - INFO [Thread-108:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45912 (no session established for client) 2017-01-24 22:12:43,667 [myid:] - WARN [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. 2017-01-24 22:12:43,667 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11230 2017-01-24 22:12:43,668 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:52444 2017-01-24 22:12:43,668 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:52444, server: localhost/127.0.0.1:11230 2017-01-24 22:12:43,668 [myid:1] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxn@354] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running 2017-01-24 22:12:43,668 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:52444 (no session established for client) 2017-01-24 22:12:43,668 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect 2017-01-24 22:12:43,898 [myid:] 
- INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:43,899 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45914 2017-01-24 22:12:43,899 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45914 2017-01-24 22:12:43,900 [myid:2] - INFO [Thread-109:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45914 (no session established for client) 2017-01-24 22:12:44,150 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:44,151 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45915 2017-01-24 22:12:44,151 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45915 2017-01-24 22:12:44,151 [myid:2] - INFO [Thread-110:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45915 (no session established for client) 2017-01-24 22:12:44,402 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:44,403 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45916 2017-01-24 22:12:44,403 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45916 2017-01-24 22:12:44,404 [myid:2] - INFO [Thread-111:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45916 (no session established for client) 2017-01-24 22:12:44,525 [myid:] - WARN [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: 
'/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. 2017-01-24 22:12:44,525 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11233 2017-01-24 22:12:44,526 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45917 2017-01-24 22:12:44,526 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:45917, server: localhost/127.0.0.1:11233 2017-01-24 22:12:44,526 [myid:2] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@354] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running 2017-01-24 22:12:44,526 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45917 (no session established for client) 2017-01-24 22:12:44,526 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect 2017-01-24 22:12:44,654 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:44,655 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45918 2017-01-24 22:12:44,655 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45918 2017-01-24 22:12:44,655 [myid:2] - INFO [Thread-112:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45918 (no session established for 
client) 2017-01-24 22:12:44,906 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:44,906 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45919 2017-01-24 22:12:44,907 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45919 2017-01-24 22:12:44,907 [myid:2] - INFO [Thread-113:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45919 (no session established for client) 2017-01-24 22:12:45,158 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:45,158 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45920 2017-01-24 22:12:45,158 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45920 2017-01-24 22:12:45,159 [myid:2] - INFO [Thread-114:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45920 (no session established for client) 2017-01-24 22:12:45,409 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:45,410 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45921 2017-01-24 22:12:45,410 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45921 2017-01-24 22:12:45,411 [myid:2] - INFO [Thread-115:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45921 (no session established for client) 2017-01-24 22:12:45,556 [myid:] - WARN [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS 
configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. 2017-01-24 22:12:45,556 [myid:] - INFO [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11227 2017-01-24 22:12:45,557 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11227:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:42952 2017-01-24 22:12:45,557 [myid:] - INFO [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:42952, server: localhost/127.0.0.1:11227 2017-01-24 22:12:45,557 [myid:0] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11227:NIOServerCnxn@354] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running 2017-01-24 22:12:45,557 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11227:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:42952 (no session established for client) 2017-01-24 22:12:45,560 [myid:] - INFO [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect 2017-01-24 22:12:45,661 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:45,661 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45923 2017-01-24 22:12:45,662 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45923 2017-01-24 22:12:45,662 [myid:2] - INFO [Thread-116:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45923 (no session 
established for client) 2017-01-24 22:12:45,913 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:45,913 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45924 2017-01-24 22:12:45,913 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45924 2017-01-24 22:12:45,914 [myid:2] - INFO [Thread-117:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45924 (no session established for client) 2017-01-24 22:12:46,164 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:46,165 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45925 2017-01-24 22:12:46,165 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45925 2017-01-24 22:12:46,166 [myid:2] - INFO [Thread-118:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45925 (no session established for client) 2017-01-24 22:12:46,416 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:46,417 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45926 2017-01-24 22:12:46,417 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45926 2017-01-24 22:12:46,417 [myid:2] - INFO [Thread-119:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45926 (no session established for client) 2017-01-24 22:12:46,668 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:46,668 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket 
connection from /127.0.0.1:45927 2017-01-24 22:12:46,669 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45927 2017-01-24 22:12:46,669 [myid:2] - INFO [Thread-120:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45927 (no session established for client) 2017-01-24 22:12:46,919 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233 2017-01-24 22:12:46,920 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45928 2017-01-24 22:12:46,920 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45928 2017-01-24 22:12:46,921 [myid:2] - INFO [Thread-121:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45928 (no session established for client) 2017-01-24 22:12:46,935 [myid:] - WARN [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. 
2017-01-24 22:12:46,935 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11230
2017-01-24 22:12:46,935 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:52460
2017-01-24 22:12:46,935 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:52460, server: localhost/127.0.0.1:11230
2017-01-24 22:12:46,936 [myid:1] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxn@354] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2017-01-24 22:12:46,936 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:52460 (no session established for client)
2017-01-24 22:12:46,936 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect
2017-01-24 22:12:47,171 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:47,172 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45930
2017-01-24 22:12:47,172 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45930
2017-01-24 22:12:47,172 [myid:2] - INFO [Thread-122:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45930 (no session established for client)
2017-01-24 22:12:47,423 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:47,423 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45931
2017-01-24 22:12:47,424 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45931
2017-01-24 22:12:47,424 [myid:2] - INFO [Thread-123:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45931 (no session established for client)
2017-01-24 22:12:47,470 [myid:] - WARN [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2017-01-24 22:12:47,470 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11233
2017-01-24 22:12:47,470 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45932
2017-01-24 22:12:47,470 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:45932, server: localhost/127.0.0.1:11233
2017-01-24 22:12:47,477 [myid:2] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@354] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2017-01-24 22:12:47,477 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45932 (no session established for client)
2017-01-24 22:12:47,477 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect
2017-01-24 22:12:47,675 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:47,675 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45933
2017-01-24 22:12:47,675 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45933
2017-01-24 22:12:47,676 [myid:2] - INFO [Thread-124:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45933 (no session established for client)
2017-01-24 22:12:47,926 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:47,927 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45934
2017-01-24 22:12:47,927 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45934
2017-01-24 22:12:47,928 [myid:2] - INFO [Thread-125:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45934 (no session established for client)
2017-01-24 22:12:48,178 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:48,179 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45935
2017-01-24 22:12:48,179 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45935
2017-01-24 22:12:48,179 [myid:2] - INFO [Thread-126:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45935 (no session established for client)
2017-01-24 22:12:48,430 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:48,430 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45936
2017-01-24 22:12:48,430 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45936
2017-01-24 22:12:48,431 [myid:2] - INFO [Thread-127:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45936 (no session established for client)
2017-01-24 22:12:48,469 [myid:] - WARN [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2017-01-24 22:12:48,469 [myid:] - INFO [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11227
2017-01-24 22:12:48,469 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11227:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:42967
2017-01-24 22:12:48,469 [myid:] - INFO [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:42967, server: localhost/127.0.0.1:11227
2017-01-24 22:12:48,470 [myid:0] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11227:NIOServerCnxn@354] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2017-01-24 22:12:48,470 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11227:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:42967 (no session established for client)
2017-01-24 22:12:48,470 [myid:] - INFO [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect
2017-01-24 22:12:48,681 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:48,682 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45938
2017-01-24 22:12:48,682 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45938
2017-01-24 22:12:48,683 [myid:2] - INFO [Thread-128:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45938 (no session established for client)
2017-01-24 22:12:48,933 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:48,934 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45939
2017-01-24 22:12:48,934 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45939
2017-01-24 22:12:48,935 [myid:2] - INFO [Thread-129:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45939 (no session established for client)
2017-01-24 22:12:49,185 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:49,186 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45940
2017-01-24 22:12:49,186 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45940
2017-01-24 22:12:49,186 [myid:2] - INFO [Thread-130:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45940 (no session established for client)
2017-01-24 22:12:49,437 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:49,437 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45941
2017-01-24 22:12:49,437 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45941
2017-01-24 22:12:49,438 [myid:2] - INFO [Thread-131:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45941 (no session established for client)
2017-01-24 22:12:49,688 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:49,689 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45942
2017-01-24 22:12:49,689 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45942
2017-01-24 22:12:49,690 [myid:2] - INFO [Thread-132:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45942 (no session established for client)
2017-01-24 22:12:49,940 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:49,940 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45943
2017-01-24 22:12:49,941 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45943
2017-01-24 22:12:49,941 [myid:2] - INFO [Thread-133:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45943 (no session established for client)
2017-01-24 22:12:50,192 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:50,192 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45944
2017-01-24 22:12:50,192 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45944
2017-01-24 22:12:50,193 [myid:2] - INFO [Thread-134:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45944 (no session established for client)
2017-01-24 22:12:50,213 [myid:] - WARN [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2017-01-24 22:12:50,213 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11230
2017-01-24 22:12:50,213 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:52476
2017-01-24 22:12:50,213 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:52476, server: localhost/127.0.0.1:11230
2017-01-24 22:12:50,214 [myid:1] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxn@354] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2017-01-24 22:12:50,214 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:52476 (no session established for client)
2017-01-24 22:12:50,214 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect
2017-01-24 22:12:50,314 [myid:] - WARN [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2017-01-24 22:12:50,314 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11233
2017-01-24 22:12:50,315 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45946
2017-01-24 22:12:50,315 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:45946, server: localhost/127.0.0.1:11233
2017-01-24 22:12:50,315 [myid:2] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@354] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2017-01-24 22:12:50,315 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45946 (no session established for client)
2017-01-24 22:12:50,316 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect
2017-01-24 22:12:50,443 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:50,444 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45947
2017-01-24 22:12:50,444 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45947
2017-01-24 22:12:50,445 [myid:2] - INFO [Thread-135:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45947 (no session established for client)
2017-01-24 22:12:50,695 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:50,696 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45948
2017-01-24 22:12:50,696 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45948
2017-01-24 22:12:50,696 [myid:2] - INFO [Thread-136:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45948 (no session established for client)
2017-01-24 22:12:50,947 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:50,947 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45949
2017-01-24 22:12:50,947 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45949
2017-01-24 22:12:50,948 [myid:2] - INFO [Thread-137:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45949 (no session established for client)
2017-01-24 22:12:50,968 [myid:] - WARN [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2017-01-24 22:12:50,968 [myid:] - INFO [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11227
2017-01-24 22:12:50,969 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11227:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:42980
2017-01-24 22:12:50,969 [myid:] - INFO [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:42980, server: localhost/127.0.0.1:11227
2017-01-24 22:12:50,969 [myid:0] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11227:NIOServerCnxn@354] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2017-01-24 22:12:50,969 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11227:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:42980 (no session established for client)
2017-01-24 22:12:50,970 [myid:] - INFO [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect
2017-01-24 22:12:51,198 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:51,199 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45951
2017-01-24 22:12:51,199 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45951
2017-01-24 22:12:51,200 [myid:2] - INFO [Thread-138:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45951 (no session established for client)
2017-01-24 22:12:51,450 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:51,451 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45952
2017-01-24 22:12:51,451 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45952
2017-01-24 22:12:51,452 [myid:2] - INFO [Thread-139:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45952 (no session established for client)
2017-01-24 22:12:51,702 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:51,703 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45953
2017-01-24 22:12:51,703 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45953
2017-01-24 22:12:51,704 [myid:2] - INFO [Thread-140:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45953 (no session established for client)
2017-01-24 22:12:51,954 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:51,955 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45954
2017-01-24 22:12:51,955 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45954
2017-01-24 22:12:51,955 [myid:2] - INFO [Thread-141:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45954 (no session established for client)
2017-01-24 22:12:52,206 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:52,206 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45955
2017-01-24 22:12:52,207 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45955
2017-01-24 22:12:52,207 [myid:2] - INFO [Thread-142:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45955 (no session established for client)
2017-01-24 22:12:52,457 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:52,458 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45956
2017-01-24 22:12:52,459 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45956
2017-01-24 22:12:52,459 [myid:2] - INFO [Thread-143:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45956 (no session established for client)
2017-01-24 22:12:52,477 [myid:] - WARN [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2017-01-24 22:12:52,478 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11230
2017-01-24 22:12:52,478 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:52488
2017-01-24 22:12:52,478 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:52488, server: localhost/127.0.0.1:11230
2017-01-24 22:12:52,478 [myid:1] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxn@354] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2017-01-24 22:12:52,478 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:52488 (no session established for client)
2017-01-24 22:12:52,479 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect
2017-01-24 22:12:52,710 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:52,710 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45958
2017-01-24 22:12:52,711 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45958
2017-01-24 22:12:52,711 [myid:2] - INFO [Thread-144:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45958 (no session established for client)
2017-01-24 22:12:52,948 [myid:] - WARN [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2017-01-24 22:12:52,948 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11233
2017-01-24 22:12:52,949 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45959
2017-01-24 22:12:52,949 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:45959, server: localhost/127.0.0.1:11233
2017-01-24 22:12:52,949 [myid:2] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@354] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2017-01-24 22:12:52,949 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45959 (no session established for client)
2017-01-24 22:12:52,949 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect
2017-01-24 22:12:52,961 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:52,962 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45960
2017-01-24 22:12:52,962 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45960
2017-01-24 22:12:52,962 [myid:2] - INFO [Thread-145:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45960 (no session established for client)
2017-01-24 22:12:53,213 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:53,213 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45961
2017-01-24 22:12:53,214 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45961
2017-01-24 22:12:53,214 [myid:2] - INFO [Thread-146:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45961 (no session established for client)
2017-01-24 22:12:53,464 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:53,465 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45962
2017-01-24 22:12:53,465 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45962
2017-01-24 22:12:53,466 [myid:2] - INFO [Thread-147:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45962 (no session established for client)
2017-01-24 22:12:53,716 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:53,717 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45963
2017-01-24 22:12:53,717 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45963
2017-01-24 22:12:53,718 [myid:2] - INFO [Thread-148:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45963 (no session established for client)
2017-01-24 22:12:53,878 [myid:] - WARN [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2017-01-24 22:12:53,878 [myid:] - INFO [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11227
2017-01-24 22:12:53,879 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11227:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:42994
2017-01-24 22:12:53,879 [myid:] - INFO [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:42994, server: localhost/127.0.0.1:11227
2017-01-24 22:12:53,879 [myid:0] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11227:NIOServerCnxn@354] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2017-01-24 22:12:53,879 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11227:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:42994 (no session established for client)
2017-01-24 22:12:53,879 [myid:] - INFO [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect
2017-01-24 22:12:53,968 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:53,968 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45965
2017-01-24 22:12:53,969 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45965
2017-01-24 22:12:53,969 [myid:2] - INFO [Thread-149:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45965 (no session established for client)
2017-01-24 22:12:54,058 [myid:1] - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11224:QuorumCnxManager@559] - Cannot open channel to 0 at election address localhost/127.0.0.1:11223
java.net.ConnectException: Connection refused
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:579)
        at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:538)
        at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:579)
        at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:769)
        at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:827)
2017-01-24 22:12:54,059 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11224:FastLeaderElection@778] - Notification time out: 25600
2017-01-24 22:12:54,220 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:54,220 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45967
2017-01-24 22:12:54,220 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45967
2017-01-24 22:12:54,221 [myid:2] - INFO [Thread-150:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45967 (no session established for client)
2017-01-24 22:12:54,471 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:54,472 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45968
2017-01-24 22:12:54,472 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45968
2017-01-24 22:12:54,473 [myid:2] - INFO [Thread-151:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45968 (no session established for client)
2017-01-24 22:12:54,723 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:54,723 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45969
2017-01-24 22:12:54,724 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45969
2017-01-24 22:12:54,724 [myid:2] - INFO [Thread-152:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45969 (no session established for client)
2017-01-24 22:12:54,975 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:54,975 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45970
2017-01-24 22:12:54,975 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45970
2017-01-24 22:12:54,976 [myid:2] - INFO [Thread-153:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45970 (no session established for client)
2017-01-24 22:12:55,110 [myid:] - WARN [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2017-01-24 22:12:55,110 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11230
2017-01-24 22:12:55,111 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:52502
2017-01-24 22:12:55,111 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:52502, server: localhost/127.0.0.1:11230
2017-01-24 22:12:55,111 [myid:1] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxn@354] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2017-01-24 22:12:55,111 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:52502 (no session established for client)
2017-01-24 22:12:55,111 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect
2017-01-24 22:12:55,226 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:55,227 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45972
2017-01-24 22:12:55,227 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45972
2017-01-24 22:12:55,229 [myid:2] - INFO [Thread-154:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45972 (no session established for client)
2017-01-24 22:12:55,479 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:55,480 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45973
2017-01-24 22:12:55,480 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45973
2017-01-24 22:12:55,481 [myid:2] - INFO [Thread-155:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45973 (no session established for client)
2017-01-24 22:12:55,718 [myid:] - WARN [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2017-01-24 22:12:55,718 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11233
2017-01-24 22:12:55,718 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45974
2017-01-24 22:12:55,719 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:45974, server: localhost/127.0.0.1:11233
2017-01-24 22:12:55,719 [myid:2] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@354] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2017-01-24 22:12:55,719 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45974 (no session established for client)
2017-01-24 22:12:55,719 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect
2017-01-24 22:12:55,731 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:55,731 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45975
2017-01-24 22:12:55,731 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45975
2017-01-24 22:12:55,732 [myid:2] - INFO [Thread-156:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45975 (no session established for client)
2017-01-24 22:12:55,982 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:55,983 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45976
2017-01-24 22:12:55,983 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45976
2017-01-24 22:12:55,984 [myid:2] - INFO [Thread-157:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45976 (no session established for client)
2017-01-24 22:12:56,115 [myid:] - WARN [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2017-01-24 22:12:56,115 [myid:] - INFO [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11227
2017-01-24 22:12:56,115 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11227:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:43007
2017-01-24 22:12:56,115 [myid:] - INFO [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:43007, server: localhost/127.0.0.1:11227
2017-01-24 22:12:56,116 [myid:0] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11227:NIOServerCnxn@354] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2017-01-24 22:12:56,116 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11227:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:43007 (no session established for client)
2017-01-24 22:12:56,116 [myid:] - INFO [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect
2017-01-24 22:12:56,234 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:56,235 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45978
2017-01-24 22:12:56,235 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45978
2017-01-24 22:12:56,236 [myid:2] - INFO [Thread-158:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45978 (no session established for client)
2017-01-24 22:12:56,486 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:56,487 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45979
2017-01-24 22:12:56,487 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45979
2017-01-24 22:12:56,488 [myid:2] - INFO [Thread-159:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45979 (no session established for client)
2017-01-24 22:12:56,738 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:56,738 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45980
2017-01-24 22:12:56,739 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45980
2017-01-24 22:12:56,739 [myid:2] - INFO [Thread-160:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45980 (no session established for client)
2017-01-24 22:12:56,989 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:56,990 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45981
2017-01-24 22:12:56,990 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45981
2017-01-24 22:12:56,991 [myid:2] - INFO [Thread-161:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45981 (no session established for client)
2017-01-24 22:12:57,241 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:57,242 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45982
2017-01-24 22:12:57,242 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45982
2017-01-24 22:12:57,242 [myid:2] - INFO [Thread-162:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45982 (no session established for client)
2017-01-24 22:12:57,493 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:57,493 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45983
2017-01-24 22:12:57,494 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45983
2017-01-24 22:12:57,494 [myid:2] - INFO [Thread-163:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45983 (no session established for client)
2017-01-24 22:12:57,744 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:57,745 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45984
2017-01-24 22:12:57,745 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45984
2017-01-24 22:12:57,746 [myid:2] - INFO [Thread-164:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45984 (no session established for client)
2017-01-24 22:12:57,885 [myid:] - WARN [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
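Context for readers: the repeated `connecting to 127.0.0.1 11233` / `Processing stat command` pairs above are the test harness polling server 2 with ZooKeeper's four-letter `stat` command (via `FourLetterWordMain`) while it waits for the restarted peer to report itself up. A minimal sketch of such a probe is below; it is not the harness's code, and it runs against a stub socket server with a canned reply rather than a real ZooKeeper node (the class name and reply text are illustrative):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class StatProbe {
    // Minimal equivalent of a four-letter-word probe: connect, send the
    // command, then read the reply until the server closes the socket.
    static String fourLetterWord(String host, int port, String cmd) throws IOException {
        try (Socket s = new Socket(host, port)) {
            s.getOutputStream().write(cmd.getBytes(StandardCharsets.US_ASCII));
            s.getOutputStream().flush();
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            InputStream in = s.getInputStream();
            byte[] chunk = new byte[1024];
            int n;
            while ((n = in.read(chunk)) != -1) {
                buf.write(chunk, 0, n);
            }
            return new String(buf.toByteArray(), StandardCharsets.US_ASCII);
        }
    }

    public static void main(String[] args) throws Exception {
        // Stub standing in for a ZooKeeper node: read the 4-byte command,
        // answer with a canned stat-style reply, then close the connection.
        try (ServerSocket server = new ServerSocket(0)) {
            Thread stub = new Thread(() -> {
                try (Socket c = server.accept()) {
                    byte[] cmd = new byte[4];
                    int off = 0;
                    while (off < 4) {
                        int r = c.getInputStream().read(cmd, off, 4 - off);
                        if (r == -1) break;
                        off += r;
                    }
                    c.getOutputStream().write(
                        "Zookeeper version: 3.4.5 (stub)\n".getBytes(StandardCharsets.US_ASCII));
                } catch (IOException ignored) {
                }
            });
            stub.start();
            String reply = fourLetterWord("127.0.0.1", server.getLocalPort(), "stat");
            stub.join();
            System.out.println(reply.trim());
        }
    }
}
```

Against a live ensemble member the same probe would return the real `stat` output (version, clients, mode); the harness keeps retrying it until the reply indicates the server is up, which is the loop visible in the log.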
2017-01-24 22:12:57,885 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11230
2017-01-24 22:12:57,885 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:52516
2017-01-24 22:12:57,885 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:52516, server: localhost/127.0.0.1:11230
2017-01-24 22:12:57,886 [myid:1] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxn@354] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2017-01-24 22:12:57,886 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:52516 (no session established for client)
2017-01-24 22:12:57,886 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect
2017-01-24 22:12:57,996 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:57,997 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45986
2017-01-24 22:12:57,997 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45986
2017-01-24 22:12:57,997 [myid:2] - INFO [Thread-165:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45986 (no session established for client)
2017-01-24 22:12:58,248 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:58,248 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45987
2017-01-24 22:12:58,249 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45987
2017-01-24 22:12:58,249 [myid:2] - INFO [Thread-166:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45987 (no session established for client)
2017-01-24 22:12:58,499 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:58,500 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45988
2017-01-24 22:12:58,500 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45988
2017-01-24 22:12:58,501 [myid:2] - INFO [Thread-167:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45988 (no session established for client)
2017-01-24 22:12:58,751 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:58,752 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45989
2017-01-24 22:12:58,752 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45989
2017-01-24 22:12:58,752 [myid:2] - INFO [Thread-168:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45989 (no session established for client)
2017-01-24 22:12:58,983 [myid:] - WARN [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2017-01-24 22:12:58,983 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11233
2017-01-24 22:12:58,983 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45990
2017-01-24 22:12:58,983 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:45990, server: localhost/127.0.0.1:11233
2017-01-24 22:12:58,984 [myid:2] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@354] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2017-01-24 22:12:58,984 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45990 (no session established for client)
2017-01-24 22:12:58,984 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect
2017-01-24 22:12:59,003 [myid:] - INFO [Thread-12:FourLetterWordMain@43] - connecting to 127.0.0.1 11233
2017-01-24 22:12:59,003 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45991
2017-01-24 22:12:59,003 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:45991
2017-01-24 22:12:59,004 [myid:2] - INFO [Thread-169:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45991 (no session established for client)
2017-01-24 22:12:59,004 [myid:] - INFO [Thread-12:JUnit4ZKTestRunner$LoggedInvokeMethod@62] - TEST METHOD FAILED testRollingUpgrade
java.lang.AssertionError: waiting for server1being up
    at org.junit.Assert.fail(Assert.java:91)
    at org.junit.Assert.assertTrue(Assert.java:43)
    at org.apache.zookeeper.server.quorum.auth.QuorumAuthUpgradeTest.restartServer(QuorumAuthUpgradeTest.java:232)
    at org.apache.zookeeper.server.quorum.auth.QuorumAuthUpgradeTest.testRollingUpgrade(QuorumAuthUpgradeTest.java:204)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
    at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:52)
    at org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28)
2017-01-24 22:12:59,005 [myid:] - INFO [main:QuorumBase@314] - Shutting down quorum peer QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227
2017-01-24 22:12:59,005 [myid:] - INFO [main:Follower@167] - shutdown called
java.lang.Exception: shutdown Follower
    at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:167)
    at org.apache.zookeeper.server.quorum.QuorumPeer.shutdown(QuorumPeer.java:896)
    at org.apache.zookeeper.test.QuorumBase.shutdown(QuorumBase.java:315)
    at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$TestQPMain.shutdown(QuorumPeerTestBase.java:59)
    at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.shutdown(QuorumPeerTestBase.java:152)
    at org.apache.zookeeper.server.quorum.auth.QuorumAuthTestBase.shutdown(QuorumAuthTestBase.java:138)
    at org.apache.zookeeper.server.quorum.auth.QuorumAuthTestBase.shutdownAll(QuorumAuthTestBase.java:131)
    at org.apache.zookeeper.server.quorum.auth.QuorumAuthUpgradeTest.tearDown(QuorumAuthUpgradeTest.java:68)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:37)
    at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:48)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
    at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:535)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1182)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:1033)
2017-01-24 22:12:59,006 [myid:] - INFO [main:FollowerZooKeeperServer@139] - Shutting down
2017-01-24 22:12:59,006 [myid:] - INFO [main:ZooKeeperServer@419] - shutting down
2017-01-24 22:12:59,006 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11227:NIOServerCnxnFactory@224] - NIOServerCnxn factory exited run method
2017-01-24 22:12:59,007 [myid:0] - ERROR [localhost/127.0.0.1:11229:QuorumCnxManager$Listener@715] - Exception while listening
java.net.SocketException: Socket closed
    at java.net.PlainSocketImpl.socketAccept(Native Method)
    at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398)
    at java.net.ServerSocket.implAccept(ServerSocket.java:530)
    at java.net.ServerSocket.accept(ServerSocket.java:498)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$Listener.run(QuorumCnxManager.java:696)
2017-01-24 22:12:59,007 [myid:0] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@896] - Interrupted while waiting for message on queue
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1049)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:73)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:884)
2017-01-24 22:12:59,007 [myid:0] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@980] - Connection broken for id 2, my id = 0, error =
java.net.SocketException: Socket closed
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.read(SocketInputStream.java:152)
    at java.net.SocketInputStream.read(SocketInputStream.java:122)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
    at java.io.DataInputStream.readInt(DataInputStream.java:387)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:965)
2017-01-24 22:12:59,008 [myid:0] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@983] - Interrupting SendWorker
2017-01-24 22:12:59,008 [myid:0] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@896] - Interrupted while waiting for message on queue
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1049)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:73)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:884)
2017-01-24 22:12:59,007 [myid:2] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@980] - Connection broken for id 0, my id = 2, error =
java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:392)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:965)
2017-01-24 22:12:59,008 [myid:2] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@983] - Interrupting SendWorker
2017-01-24 22:12:59,008 [myid:1] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@980] - Connection broken for id 0, my id = 1, error =
java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:392)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:965)
2017-01-24 22:12:59,009 [myid:1] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@983] - Interrupting SendWorker
2017-01-24 22:12:59,008 [myid:0] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@980] - Connection broken for id 1, my id = 0, error =
java.net.SocketException: Socket closed
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.read(SocketInputStream.java:152)
    at java.net.SocketInputStream.read(SocketInputStream.java:122)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
    at java.io.DataInputStream.readInt(DataInputStream.java:387)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:965)
2017-01-24 22:12:59,009 [myid:0] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@983] - Interrupting SendWorker
2017-01-24 22:12:59,008 [myid:0] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@906] - Send worker leaving thread
2017-01-24 22:12:59,010 [myid:1] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@896] - Interrupted while waiting for message on queue
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1049)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:73)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:884)
2017-01-24 22:12:59,009 [myid:0] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@906] - Send worker leaving thread
2017-01-24 22:12:59,010 [myid:2] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@896] - Interrupted while waiting for message on queue
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1049)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:73)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:884)
2017-01-24 22:12:59,011 [myid:1] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@906] - Send worker leaving thread
2017-01-24 22:12:59,011 [myid:2] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@906] - Send worker leaving thread
2017-01-24 22:12:59,011 [myid:] - INFO [main:QuorumBase@318] - Shutting down leader election QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227
2017-01-24 22:12:59,011 [myid:] - INFO [main:QuorumBase@323] - Waiting for QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227 to exit thread
2017-01-24 22:12:59,371 [myid:0] - INFO [WorkerSender[myid=0]:FastLeaderElection$Messenger$WorkerSender@370] - WorkerSender is down
2017-01-24 22:12:59,372 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection$Messenger$WorkerReceiver@340] - WorkerReceiver is down
2017-01-24 22:12:59,885 [myid:] - WARN [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2017-01-24 22:12:59,886 [myid:] - INFO [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11227
2017-01-24 22:12:59,886 [myid:] - WARN [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@1102] - Session 0x159d440f0ed0000 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
2017-01-24 22:13:00,007 [myid:0] - INFO [localhost/127.0.0.1:11229:QuorumCnxManager$Listener@728] - Leaving listener
2017-01-24 22:13:01,546 [myid:] - WARN [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2017-01-24 22:13:01,546 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11230
2017-01-24 22:13:01,547 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:52524
2017-01-24 22:13:01,547 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:52524, server: localhost/127.0.0.1:11230
2017-01-24 22:13:01,547 [myid:1] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxn@354] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2017-01-24 22:13:01,547 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:52524 (no session established for client)
2017-01-24 22:13:01,547 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect
2017-01-24 22:13:01,960 [myid:] - WARN [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2017-01-24 22:13:01,960 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11233
2017-01-24 22:13:01,961 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45994
2017-01-24 22:13:01,961 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:45994, server: localhost/127.0.0.1:11233
2017-01-24 22:13:01,961 [myid:2] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@354] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2017-01-24 22:13:01,961 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45994 (no session established for client)
2017-01-24 22:13:01,961 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect
2017-01-24 22:13:03,003 [myid:] - WARN [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2017-01-24 22:13:03,003 [myid:] - INFO [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11227
2017-01-24 22:13:03,004 [myid:] - WARN [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@1102] - Session 0x159d440f0ed0000 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
2017-01-24 22:13:04,254 [myid:] - WARN [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2017-01-24 22:13:04,254 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11230 2017-01-24 22:13:04,254 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:52527 2017-01-24 22:13:04,254 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:52527, server: localhost/127.0.0.1:11230 2017-01-24 22:13:04,255 [myid:1] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxn@354] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running 2017-01-24 22:13:04,255 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:52527 (no session established for client) 2017-01-24 22:13:04,255 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect 2017-01-24 22:13:04,947 [myid:] - WARN [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. 
2017-01-24 22:13:04,947 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11233 2017-01-24 22:13:04,947 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:45997 2017-01-24 22:13:04,947 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:45997, server: localhost/127.0.0.1:11233 2017-01-24 22:13:04,948 [myid:2] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@354] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running 2017-01-24 22:13:04,948 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:45997 (no session established for client) 2017-01-24 22:13:04,948 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect 2017-01-24 22:13:05,888 [myid:] - WARN [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. 
2017-01-24 22:13:05,889 [myid:] - INFO [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11227 2017-01-24 22:13:05,889 [myid:] - WARN [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@1102] - Session 0x159d440f0ed0000 for server null, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081) 2017-01-24 22:13:07,549 [myid:] - WARN [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. 
2017-01-24 22:13:07,549 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11230 2017-01-24 22:13:07,550 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:52530 2017-01-24 22:13:07,550 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:52530, server: localhost/127.0.0.1:11230 2017-01-24 22:13:07,550 [myid:1] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxn@354] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running 2017-01-24 22:13:07,550 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:52530 (no session established for client) 2017-01-24 22:13:07,550 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect 2017-01-24 22:13:07,986 [myid:] - WARN [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. 
2017-01-24 22:13:07,986 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11233 2017-01-24 22:13:07,987 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:46000 2017-01-24 22:13:07,987 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:46000, server: localhost/127.0.0.1:11233 2017-01-24 22:13:07,987 [myid:2] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@354] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running 2017-01-24 22:13:07,987 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:46000 (no session established for client) 2017-01-24 22:13:07,987 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect 2017-01-24 22:13:08,583 [myid:] - WARN [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. 
2017-01-24 22:13:08,583 [myid:] - INFO [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11227
2017-01-24 22:13:08,583 [myid:] - WARN [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@1102] - Session 0x159d440f0ed0000 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
	at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
2017-01-24 22:13:08,866 [myid:2] - WARN [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:QuorumPeer@867] - Unexpected exception
java.lang.InterruptedException: Timeout while waiting for epoch to be acked by quorum
	at org.apache.zookeeper.server.quorum.Leader.waitForEpochAck(Leader.java:906)
	at org.apache.zookeeper.server.quorum.Leader.lead(Leader.java:392)
	at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:864)
2017-01-24 22:13:08,866 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:Leader@491] - Shutting down
2017-01-24 22:13:08,866 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:Leader@497] - Shutdown called
java.lang.Exception: shutdown Leader! reason: Forcing shutdown
	at org.apache.zookeeper.server.quorum.Leader.shutdown(Leader.java:497)
	at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:870)
2017-01-24 22:13:08,867 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:ZooKeeperServer@419] - shutting down
2017-01-24 22:13:08,867 [myid:2] - INFO [Thread-49:Leader$LearnerCnxAcceptor@318] - exception while shutting down acceptor: java.net.SocketException: Socket closed
2017-01-24 22:13:08,867 [myid:0] - WARN [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:Follower@90] - Exception when following the leader
java.io.EOFException
	at java.io.DataInputStream.readInt(DataInputStream.java:392)
	at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63)
	at org.apache.zookeeper.server.quorum.QuorumPacket.deserialize(QuorumPacket.java:83)
	at org.apache.jute.BinaryInputArchive.readRecord(BinaryInputArchive.java:99)
	at org.apache.zookeeper.server.quorum.Learner.readPacket(Learner.java:152)
	at org.apache.zookeeper.server.quorum.Learner.syncWithLeader(Learner.java:321)
	at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:83)
	at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:851)
2017-01-24 22:13:08,867 [myid:2] - ERROR [LearnerHandler-/127.0.0.1:42608:LearnerHandler@596] - Unexpected exception causing shutdown
java.lang.InterruptedException
	at java.lang.Object.wait(Native Method)
	at org.apache.zookeeper.server.quorum.Leader.waitForEpochAck(Leader.java:902)
	at org.apache.zookeeper.server.quorum.LearnerHandler.run(LearnerHandler.java:322)
2017-01-24 22:13:08,868 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:Follower@167] - shutdown called
java.lang.Exception: shutdown Follower
	at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:167)
	at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:855)
2017-01-24 22:13:08,868 [myid:2] - ERROR [LearnerHandler-/127.0.0.1:42602:LearnerHandler@596] - Unexpected exception causing shutdown
java.lang.InterruptedException
	at java.lang.Object.wait(Native Method)
	at org.apache.zookeeper.server.quorum.Leader.waitForEpochAck(Leader.java:902)
	at org.apache.zookeeper.server.quorum.LearnerHandler.run(LearnerHandler.java:322)
2017-01-24 22:13:08,868 [myid:1] - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:Follower@90] - Exception when following the leader
java.io.EOFException
	at java.io.DataInputStream.readInt(DataInputStream.java:392)
	at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63)
	at org.apache.zookeeper.server.quorum.QuorumPacket.deserialize(QuorumPacket.java:83)
	at org.apache.jute.BinaryInputArchive.readRecord(BinaryInputArchive.java:99)
	at org.apache.zookeeper.server.quorum.Learner.readPacket(Learner.java:152)
	at org.apache.zookeeper.server.quorum.Learner.syncWithLeader(Learner.java:321)
	at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:83)
	at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:851)
2017-01-24 22:13:08,868 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:QuorumPeer@781] - LOOKING
2017-01-24 22:13:08,869 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:Follower@167] - shutdown called
java.lang.Exception: shutdown Follower
	at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:167)
	at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:855)
2017-01-24 22:13:08,869 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:FollowerZooKeeperServer@139] - Shutting down
2017-01-24 22:13:08,869 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:ZooKeeperServer@419] - shutting down
2017-01-24 22:13:08,870 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:QuorumPeer@781] - LOOKING
2017-01-24 22:13:08,868 [myid:2] - WARN [LearnerHandler-/127.0.0.1:42602:LearnerHandler@598] - ******* GOODBYE /127.0.0.1:42602 ********
2017-01-24 22:13:08,870 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:FileSnap@83] - Reading snapshot /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test4772652757083079773.junit.dir/data/version-2/snapshot.200000002
2017-01-24 22:13:08,871 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:FastLeaderElection@744] - New election. My id = 1, proposed zxid=0x300000001
2017-01-24 22:13:08,868 [myid:2] - WARN [LearnerHandler-/127.0.0.1:42608:LearnerHandler@598] - ******* GOODBYE /127.0.0.1:42608 ********
2017-01-24 22:13:08,868 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:FollowerZooKeeperServer@139] - Shutting down
2017-01-24 22:13:08,872 [myid:1] - WARN [WorkerSender[myid=1]:QuorumCnxManager@559] - Cannot open channel to 0 at election address localhost/127.0.0.1:11229
java.net.ConnectException: Connection refused
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:579)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:538)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:514)
	at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:393)
	at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:365)
	at java.lang.Thread.run(Thread.java:745)
2017-01-24 22:13:08,872 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:ZooKeeperServer@419] - shutting down
2017-01-24 22:13:08,872 [myid:0] - WARN [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11227:QuorumPeer@879] - QuorumPeer main thread exited
2017-01-24 22:13:08,873 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x300000001 (n.zxid), 0x4 (n.round), LOOKING (n.state), 1 (n.sid), 0x3 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:13:08,870 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:FileSnap@83] - Reading snapshot /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test6343054086158567637.junit.dir/data/version-2/snapshot.0
2017-01-24 22:13:08,874 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x300000001 (n.zxid), 0x4 (n.round), LOOKING (n.state), 1 (n.sid), 0x3 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:13:08,875 [myid:] - INFO [main:QuorumBase@314] - Shutting down quorum peer QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233
2017-01-24 22:13:08,875 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11233:NIOServerCnxnFactory@224] - NIOServerCnxn factory exited run method
2017-01-24 22:13:08,875 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:FastLeaderElection@744] - New election. My id = 2, proposed zxid=0x0
2017-01-24 22:13:08,876 [myid:2] - WARN [WorkerSender[myid=2]:QuorumCnxManager@559] - Cannot open channel to 0 at election address localhost/127.0.0.1:11229
java.net.ConnectException: Connection refused
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:579)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:538)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:514)
	at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:393)
	at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:365)
	at java.lang.Thread.run(Thread.java:745)
2017-01-24 22:13:08,877 [myid:2] - INFO [WorkerSender[myid=2]:FastLeaderElection$Messenger$WorkerSender@370] - WorkerSender is down
2017-01-24 22:13:08,876 [myid:2] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@896] - Interrupted while waiting for message on queue
java.lang.InterruptedException
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
	at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1049)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:73)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:884)
2017-01-24 22:13:08,877 [myid:2] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@906] - Send worker leaving thread
2017-01-24 22:13:08,877 [myid:2] - WARN [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:QuorumPeer@829] - Unexpected exception
java.lang.InterruptedException
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
	at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
	at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:758)
	at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:827)
2017-01-24 22:13:08,878 [myid:2] - WARN [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233:QuorumPeer@879] - QuorumPeer main thread exited
2017-01-24 22:13:08,877 [myid:] - INFO [main:QuorumBase@318] - Shutting down leader election QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233
2017-01-24 22:13:08,878 [myid:] - INFO [main:QuorumBase@323] - Waiting for QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11233 to exit thread
2017-01-24 22:13:08,877 [myid:2] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@980] - Connection broken for id 1, my id = 2, error =
java.net.SocketException: Socket closed
	at java.net.SocketInputStream.socketRead0(Native Method)
	at java.net.SocketInputStream.read(SocketInputStream.java:152)
	at java.net.SocketInputStream.read(SocketInputStream.java:122)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
	at java.io.DataInputStream.readInt(DataInputStream.java:387)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:965)
2017-01-24 22:13:08,879 [myid:2] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@983] - Interrupting SendWorker
2017-01-24 22:13:08,876 [myid:1] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@980] - Connection broken for id 2, my id = 1, error =
java.io.EOFException
	at java.io.DataInputStream.readInt(DataInputStream.java:392)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:965)
2017-01-24 22:13:08,879 [myid:1] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@983] - Interrupting SendWorker
2017-01-24 22:13:08,876 [myid:2] - ERROR [localhost/127.0.0.1:11235:QuorumCnxManager$Listener@715] - Exception while listening
java.net.SocketException: Socket closed
	at java.net.PlainSocketImpl.socketAccept(Native Method)
	at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398)
	at java.net.ServerSocket.implAccept(ServerSocket.java:530)
	at java.net.ServerSocket.accept(ServerSocket.java:498)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager$Listener.run(QuorumCnxManager.java:696)
2017-01-24 22:13:08,880 [myid:1] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@896] - Interrupted while waiting for message on queue
java.lang.InterruptedException
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
	at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1049)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:73)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:884)
2017-01-24 22:13:08,880 [myid:1] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@906] - Send worker leaving thread
2017-01-24 22:13:08,880 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testRollingUpgrade
java.lang.AssertionError: waiting for server1being up
	at org.junit.Assert.fail(Assert.java:91)
	at org.junit.Assert.assertTrue(Assert.java:43)
	at org.apache.zookeeper.server.quorum.auth.QuorumAuthUpgradeTest.restartServer(QuorumAuthUpgradeTest.java:232)
	at org.apache.zookeeper.server.quorum.auth.QuorumAuthUpgradeTest.testRollingUpgrade(QuorumAuthUpgradeTest.java:204)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
	at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:52)
	at org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28)
2017-01-24 22:13:08,880 [myid:] - INFO [main:ZKTestCase$1@55] - FINISHED testRollingUpgrade
2017-01-24 22:13:08,885 [myid:] - INFO [main:ZKTestCase$1@50] - STARTING testNullAuthLearnerServer
2017-01-24 22:13:08,885 [myid:] - INFO [Thread-170:JUnit4ZKTestRunner$LoggedInvokeMethod@50] - RUNNING TEST METHOD testNullAuthLearnerServer
2017-01-24 22:13:08,885 [myid:] - INFO [Thread-170:PortAssignment@32] - assigning port 11236
2017-01-24 22:13:08,886 [myid:] - INFO [Thread-170:PortAssignment@32] - assigning port 11237
2017-01-24 22:13:08,886 [myid:] - INFO [Thread-170:PortAssignment@32] - assigning port 11238
2017-01-24 22:13:08,886 [myid:] - INFO [Thread-170:PortAssignment@32] - assigning port 11239
2017-01-24 22:13:08,886 [myid:] - INFO [Thread-170:PortAssignment@32] - assigning port 11240
2017-01-24 22:13:08,886 [myid:] - INFO [Thread-170:PortAssignment@32] - assigning port 11241
2017-01-24 22:13:08,886 [myid:] - INFO [Thread-170:QuorumPeerTestBase$MainThread@81] - id = 0 tmpDir = /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test6384659498769109567.junit.dir clientPort = 11236
2017-01-24 22:13:08,887 [myid:] - INFO [Thread-171:QuorumPeerConfig@111] - Reading configuration from: /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test6384659498769109567.junit.dir/zoo.cfg
2017-01-24 22:13:08,887 [myid:] - INFO [Thread-170:QuorumPeerTestBase$MainThread@81] - id = 1 tmpDir = /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test5470635973333042784.junit.dir clientPort = 11239
2017-01-24 22:13:08,887 [myid:] - WARN [Thread-171:QuorumPeerConfig@327] - No server failure will be tolerated. You need at least 3 servers.
2017-01-24 22:13:08,888 [myid:] - INFO [Thread-171:QuorumPeerConfig@374] - Defaulting to majority quorums
2017-01-24 22:13:08,888 [myid:0] - INFO [Thread-171:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3
2017-01-24 22:13:08,888 [myid:0] - INFO [Thread-171:DatadirCleanupManager@79] - autopurge.purgeInterval set to 0
2017-01-24 22:13:08,888 [myid:0] - INFO [Thread-171:DatadirCleanupManager@101] - Purge task is not scheduled.
2017-01-24 22:13:08,888 [myid:0] - WARN [Thread-171:QuorumPeerMain@129] - Unable to register log4j JMX control
javax.management.InstanceAlreadyExistsException: log4j:hiearchy=default
	at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
	at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
	at org.apache.zookeeper.jmx.ManagedUtil.registerLog4jMBeans(ManagedUtil.java:53)
	at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:127)
	at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:116)
	at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.run(QuorumPeerTestBase.java:140)
	at java.lang.Thread.run(Thread.java:745)
2017-01-24 22:13:08,888 [myid:0] - INFO [Thread-171:QuorumPeerMain@132] - Starting quorum peer
2017-01-24 22:13:08,889 [myid:] - INFO [Thread-170:FourLetterWordMain@43] - connecting to 127.0.0.1 11236
2017-01-24 22:13:08,897 [myid:] - INFO [Thread-170:ClientBase@246] - server 127.0.0.1:11236 not up java.net.ConnectException: Connection refused
2017-01-24 22:13:08,889 [myid:] - INFO [Thread-172:QuorumPeerConfig@111] - Reading configuration from: /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test5470635973333042784.junit.dir/zoo.cfg
2017-01-24 22:13:08,897 [myid:] - WARN [Thread-172:QuorumPeerConfig@327] - No server failure will be tolerated. You need at least 3 servers.
2017-01-24 22:13:08,897 [myid:] - INFO [Thread-172:QuorumPeerConfig@374] - Defaulting to majority quorums
2017-01-24 22:13:08,897 [myid:1] - INFO [Thread-172:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3
2017-01-24 22:13:08,898 [myid:1] - INFO [Thread-172:DatadirCleanupManager@79] - autopurge.purgeInterval set to 0
2017-01-24 22:13:08,898 [myid:1] - INFO [Thread-172:DatadirCleanupManager@101] - Purge task is not scheduled.
2017-01-24 22:13:08,898 [myid:1] - WARN [Thread-172:QuorumPeerMain@129] - Unable to register log4j JMX control
javax.management.InstanceAlreadyExistsException: log4j:hiearchy=default
	at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
	at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
	at org.apache.zookeeper.jmx.ManagedUtil.registerLog4jMBeans(ManagedUtil.java:53)
	at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:127)
	at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:116)
	at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.run(QuorumPeerTestBase.java:140)
	at java.lang.Thread.run(Thread.java:745)
2017-01-24 22:13:08,898 [myid:1] - INFO [Thread-172:QuorumPeerMain@132] - Starting quorum peer
2017-01-24 22:13:08,899 [myid:1] - INFO [Thread-172:NIOServerCnxnFactory@94] - binding to port 0.0.0.0/0.0.0.0:11239
2017-01-24 22:13:08,899 [myid:1] - INFO [Thread-172:QuorumPeer@1048] - minSessionTimeout set to -1
2017-01-24 22:13:08,899 [myid:1] - INFO [Thread-172:QuorumPeer@1059] - maxSessionTimeout set to -1
2017-01-24 22:13:08,900 [myid:1] - INFO [Thread-172:QuorumPeer@1277] - QuorumPeer communication is not secured!
2017-01-24 22:13:08,900 [myid:1] - INFO [Thread-172:QuorumPeer@1306] - quorum.cnxn.threads.size set to 20
2017-01-24 22:13:08,900 [myid:1] - INFO [Thread-172:QuorumPeer@540] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2017-01-24 22:13:08,889 [myid:0] - INFO [Thread-171:NIOServerCnxnFactory@94] - binding to port 0.0.0.0/0.0.0.0:11236
2017-01-24 22:13:08,901 [myid:0] - INFO [Thread-171:QuorumPeer@1048] - minSessionTimeout set to -1
2017-01-24 22:13:08,901 [myid:0] - INFO [Thread-171:QuorumPeer@1059] - maxSessionTimeout set to -1
2017-01-24 22:13:08,901 [myid:0] - INFO [Thread-171:QuorumPeer@1277] - QuorumPeer communication is not secured!
2017-01-24 22:13:08,901 [myid:0] - INFO [Thread-171:QuorumPeer@1306] - quorum.cnxn.threads.size set to 20
2017-01-24 22:13:08,902 [myid:0] - INFO [Thread-171:QuorumPeer@540] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2017-01-24 22:13:08,903 [myid:1] - INFO [Thread-172:QuorumPeer@555] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2017-01-24 22:13:08,903 [myid:0] - INFO [Thread-171:QuorumPeer@555] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2017-01-24 22:13:08,905 [myid:1] - INFO [Thread-173:QuorumCnxManager$Listener@691] - My election bind port: 0.0.0.0/0.0.0.0:11241
2017-01-24 22:13:08,909 [myid:0] - INFO [Thread-174:QuorumCnxManager$Listener@691] - My election bind port: 0.0.0.0/0.0.0.0:11238
2017-01-24 22:13:08,910 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11236:QuorumPeer@781] - LOOKING
2017-01-24 22:13:08,910 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11239:QuorumPeer@781] - LOOKING
2017-01-24 22:13:08,910 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11236:FastLeaderElection@744] - New election. My id = 0, proposed zxid=0x0
2017-01-24 22:13:08,910 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11239:FastLeaderElection@744] - New election. My id = 1, proposed zxid=0x0
2017-01-24 22:13:08,911 [myid:0] - INFO [WorkerSender[myid=0]:QuorumCnxManager@331] - Have smaller server identifier, so dropping the connection: (1, 0)
2017-01-24 22:13:08,911 [myid:0] - INFO [localhost/127.0.0.1:11238:QuorumCnxManager$Listener@698] - Received connection request /127.0.0.1:59130
2017-01-24 22:13:08,911 [myid:1] - INFO [localhost/127.0.0.1:11241:QuorumCnxManager$Listener@698] - Received connection request /127.0.0.1:51284
2017-01-24 22:13:08,911 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 0 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:13:08,912 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:13:08,912 [myid:1] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@901] - Exception when using channel: for id 0 my id = 1 error = java.net.SocketException: Socket closed
2017-01-24 22:13:08,913 [myid:1] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@906] - Send worker leaving thread
2017-01-24 22:13:08,913 [myid:1] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@983] - Interrupting SendWorker
2017-01-24 22:13:08,914 [myid:0] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@901] - Exception when using channel: for id 1 my id = 0 error = java.net.SocketException: Broken pipe
2017-01-24 22:13:08,915 [myid:0] - INFO [localhost/127.0.0.1:11238:QuorumCnxManager$Listener@698] - Received connection request /127.0.0.1:59132
2017-01-24 22:13:08,915 [myid:0] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@983] - Interrupting SendWorker
2017-01-24 22:13:08,915 [myid:0] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@906] - Send worker leaving thread
2017-01-24 22:13:08,917 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 0 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:13:08,917 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:13:08,918 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:13:08,921 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:13:09,074 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:FastLeaderElection@778] - Notification time out: 400
2017-01-24 22:13:09,075 [myid:1] - WARN [WorkerSender[myid=1]:QuorumCnxManager@559] - Cannot open channel to 0 at election address localhost/127.0.0.1:11229
java.net.ConnectException: Connection refused
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:579)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:538)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:514)
	at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:393)
	at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:365)
	at java.lang.Thread.run(Thread.java:745)
2017-01-24 22:13:09,075 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x300000001 (n.zxid), 0x4 (n.round), LOOKING (n.state), 1 (n.sid), 0x3 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:13:09,075 [myid:1] - WARN [WorkerSender[myid=1]:QuorumCnxManager@559] - Cannot open channel to 2 at election address localhost/127.0.0.1:11235
java.net.ConnectException: Connection refused
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:579)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:538)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:514)
	at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:393)
	at
org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:365) at java.lang.Thread.run(Thread.java:745) 2017-01-24 22:13:09,119 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11236:QuorumPeer@849] - FOLLOWING 2017-01-24 22:13:09,119 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11236:ZooKeeperServer@162] - Created server with tickTime 4000 minSessionTimeout 8000 maxSessionTimeout 80000 datadir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test6384659498769109567.junit.dir/data/version-2 snapdir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test6384659498769109567.junit.dir/data/version-2 2017-01-24 22:13:09,119 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11236:Follower@64] - FOLLOWING - LEADER ELECTION TOOK - 209 2017-01-24 22:13:09,120 [myid:0] - WARN [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11236:Learner@233] - Unexpected exception, tries=0, connecting to localhost/127.0.0.1:11240 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.Learner.connectToLeader(Learner.java:225) at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:72) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:851) 2017-01-24 22:13:09,122 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11239:QuorumPeer@861] - LEADING 2017-01-24 22:13:09,122 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11239:ZooKeeperServer@162] - Created server with tickTime 4000 minSessionTimeout 8000 maxSessionTimeout 80000 datadir 
/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test5470635973333042784.junit.dir/data/version-2 snapdir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test5470635973333042784.junit.dir/data/version-2 2017-01-24 22:13:09,122 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11239:Leader@356] - LEADING - LEADER ELECTION TOOK - 212 2017-01-24 22:13:09,123 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11239:FileTxnSnapLog@281] - Snapshotting: 0x0 to /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test5470635973333042784.junit.dir/data/version-2/snapshot.0 2017-01-24 22:13:09,147 [myid:] - INFO [Thread-170:FourLetterWordMain@43] - connecting to 127.0.0.1 11236 2017-01-24 22:13:09,147 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11236:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:40329 2017-01-24 22:13:09,147 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11236:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:40329 2017-01-24 22:13:09,148 [myid:0] - INFO [Thread-176:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:40329 (no session established for client) 2017-01-24 22:13:09,399 [myid:] - INFO [Thread-170:FourLetterWordMain@43] - connecting to 127.0.0.1 11236 2017-01-24 22:13:09,399 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11236:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:40330 2017-01-24 22:13:09,399 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11236:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:40330 2017-01-24 22:13:09,400 [myid:0] - INFO [Thread-177:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:40330 (no session established for client) 2017-01-24 22:13:09,476 [myid:1] - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:QuorumCnxManager@559] - Cannot open channel to 0 at election address localhost/127.0.0.1:11229 java.net.ConnectException: Connection 
refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:538) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:579) at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:769) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:827) 2017-01-24 22:13:09,477 [myid:1] - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:QuorumCnxManager@559] - Cannot open channel to 2 at election address localhost/127.0.0.1:11235 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:538) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:579) at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:769) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:827) 2017-01-24 22:13:09,477 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:FastLeaderElection@778] - Notification time out: 800 2017-01-24 22:13:09,650 [myid:] - INFO [Thread-170:FourLetterWordMain@43] - connecting to 
127.0.0.1 11236 2017-01-24 22:13:09,651 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11236:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:40333 2017-01-24 22:13:09,651 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11236:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:40333 2017-01-24 22:13:09,651 [myid:0] - INFO [Thread-178:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:40333 (no session established for client) 2017-01-24 22:13:09,880 [myid:2] - INFO [localhost/127.0.0.1:11235:QuorumCnxManager$Listener@728] - Leaving listener 2017-01-24 22:13:09,902 [myid:] - INFO [Thread-170:FourLetterWordMain@43] - connecting to 127.0.0.1 11236 2017-01-24 22:13:09,902 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11236:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:40334 2017-01-24 22:13:09,903 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11236:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:40334 2017-01-24 22:13:09,903 [myid:0] - INFO [Thread-179:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:40334 (no session established for client) 2017-01-24 22:13:10,121 [myid:1] - INFO [LearnerHandler-/127.0.0.1:51982:LearnerHandler@287] - Follower sid: 0 : info : org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@36d2499a 2017-01-24 22:13:10,123 [myid:1] - INFO [LearnerHandler-/127.0.0.1:51982:LearnerHandler@342] - Synchronizing with Follower sid: 0 maxCommittedLog=0x0 minCommittedLog=0x0 peerLastZxid=0x0 2017-01-24 22:13:10,124 [myid:1] - INFO [LearnerHandler-/127.0.0.1:51982:LearnerHandler@441] - Sending snapshot last zxid of peer is 0x0 zxid of leader is 0x100000000sent zxid of db as 0x0 2017-01-24 22:13:10,124 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11236:Learner@329] - Getting a snapshot from leader 2017-01-24 22:13:10,125 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11236:FileTxnSnapLog@281] - Snapshotting: 
0x0 to /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test6384659498769109567.junit.dir/data/version-2/snapshot.0 2017-01-24 22:13:10,126 [myid:1] - INFO [LearnerHandler-/127.0.0.1:51982:LearnerHandler@477] - Received NEWLEADER-ACK message from 0 2017-01-24 22:13:10,126 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11239:Leader@934] - Have quorum of supporters, sids: [ 0,1 ]; starting up and setting last processed zxid: 0x100000000 2017-01-24 22:13:10,153 [myid:] - INFO [Thread-170:FourLetterWordMain@43] - connecting to 127.0.0.1 11236 2017-01-24 22:13:10,154 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11236:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:40336 2017-01-24 22:13:10,154 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11236:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:40336 2017-01-24 22:13:10,155 [myid:0] - INFO [Thread-181:NIOServerCnxn$StatCommand@655] - Stat command output 2017-01-24 22:13:10,155 [myid:0] - INFO [Thread-181:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:40336 (no session established for client) 2017-01-24 22:13:10,155 [myid:] - INFO [Thread-170:FourLetterWordMain@43] - connecting to 127.0.0.1 11239 2017-01-24 22:13:10,156 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11239:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:35057 2017-01-24 22:13:10,156 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11239:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:35057 2017-01-24 22:13:10,156 [myid:1] - INFO [Thread-182:NIOServerCnxn$StatCommand@655] - Stat command output 2017-01-24 22:13:10,157 [myid:1] - INFO [Thread-182:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:35057 (no session established for client) 2017-01-24 22:13:10,157 [myid:] - INFO [Thread-170:ZooKeeper@438] - Initiating client connection, connectString=127.0.0.1:11236,127.0.0.1:11239 sessionTimeout=30000 
watcher=org.apache.zookeeper.test.ClientBase$CountdownWatcher@6538b0f9 2017-01-24 22:13:10,158 [myid:] - WARN [Thread-170-SendThread(localhost:11236):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. 2017-01-24 22:13:10,158 [myid:] - INFO [Thread-170-SendThread(localhost:11236):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11236 2017-01-24 22:13:10,159 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11236:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:40338 2017-01-24 22:13:10,159 [myid:] - INFO [Thread-170-SendThread(localhost:11236):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:40338, server: localhost/127.0.0.1:11236 2017-01-24 22:13:10,159 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11236:ZooKeeperServer@839] - Client attempting to establish new session at /127.0.0.1:40338 2017-01-24 22:13:10,160 [myid:1] - INFO [SyncThread:1:FileTxnLog@199] - Creating new log file: log.100000001 2017-01-24 22:13:10,160 [myid:0] - WARN [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11236:Follower@119] - Got zxid 0x100000001 expected 0x1 2017-01-24 22:13:10,160 [myid:0] - INFO [SyncThread:0:FileTxnLog@199] - Creating new log file: log.100000001 2017-01-24 22:13:10,163 [myid:0] - INFO [CommitProcessor:0:ZooKeeperServer@595] - Established session 0x59d441a96d0000 with negotiated timeout 30000 for client /127.0.0.1:40338 2017-01-24 22:13:10,163 [myid:] - INFO [Thread-170-SendThread(localhost:11236):ClientCnxn$SendThread@1235] - Session establishment complete on server localhost/127.0.0.1:11236, sessionid = 
0x59d441a96d0000, negotiated timeout = 30000 2017-01-24 22:13:10,167 [myid:1] - INFO [ProcessThread(sid:1 cport:-1)::PrepRequestProcessor@494] - Processed session termination for sessionid: 0x59d441a96d0000 2017-01-24 22:13:10,168 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11236:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:40338 which had sessionid 0x59d441a96d0000 2017-01-24 22:13:10,169 [myid:] - INFO [Thread-170:ZooKeeper@684] - Session: 0x59d441a96d0000 closed 2017-01-24 22:13:10,169 [myid:] - INFO [Thread-170-EventThread:ClientCnxn$EventThread@512] - EventThread shut down 2017-01-24 22:13:10,169 [myid:] - INFO [Thread-170:JUnit4ZKTestRunner$LoggedInvokeMethod@55] - Memory used 27198 2017-01-24 22:13:10,169 [myid:] - INFO [Thread-170:JUnit4ZKTestRunner$LoggedInvokeMethod@60] - Number of threads 53 2017-01-24 22:13:10,169 [myid:] - INFO [Thread-170:JUnit4ZKTestRunner$LoggedInvokeMethod@65] - FINISHED TEST METHOD testNullAuthLearnerServer 2017-01-24 22:13:10,169 [myid:] - INFO [main:QuorumBase@314] - Shutting down quorum peer QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11236 2017-01-24 22:13:10,169 [myid:] - INFO [main:Follower@167] - shutdown called java.lang.Exception: shutdown Follower at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:167) at org.apache.zookeeper.server.quorum.QuorumPeer.shutdown(QuorumPeer.java:896) at org.apache.zookeeper.test.QuorumBase.shutdown(QuorumBase.java:315) at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$TestQPMain.shutdown(QuorumPeerTestBase.java:59) at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.shutdown(QuorumPeerTestBase.java:152) at org.apache.zookeeper.server.quorum.auth.QuorumAuthTestBase.shutdown(QuorumAuthTestBase.java:138) at org.apache.zookeeper.server.quorum.auth.QuorumAuthTestBase.shutdownAll(QuorumAuthTestBase.java:131) at org.apache.zookeeper.server.quorum.auth.QuorumAuthUpgradeTest.tearDown(QuorumAuthUpgradeTest.java:68) at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:37) at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:48) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31) at org.junit.runners.ParentRunner.run(ParentRunner.java:236) at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:535) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1182) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:1033) 2017-01-24 22:13:10,170 [myid:] - INFO [main:FollowerZooKeeperServer@139] - Shutting down 2017-01-24 22:13:10,170 [myid:] - INFO [main:ZooKeeperServer@419] - shutting down 2017-01-24 22:13:10,170 [myid:] - INFO [main:FollowerRequestProcessor@105] - Shutting down 2017-01-24 22:13:10,170 [myid:] - INFO 
[main:CommitProcessor@181] - Shutting down 2017-01-24 22:13:10,170 [myid:] - INFO [main:FinalRequestProcessor@415] - shutdown of request processor complete 2017-01-24 22:13:10,170 [myid:0] - INFO [FollowerRequestProcessor:0:FollowerRequestProcessor@95] - FollowerRequestProcessor exited loop! 2017-01-24 22:13:10,170 [myid:0] - INFO [CommitProcessor:0:CommitProcessor@150] - CommitProcessor exited loop! 2017-01-24 22:13:10,171 [myid:] - INFO [main:SyncRequestProcessor@175] - Shutting down 2017-01-24 22:13:10,171 [myid:0] - INFO [SyncThread:0:SyncRequestProcessor@155] - SyncRequestProcessor exited! 2017-01-24 22:13:10,172 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11236:NIOServerCnxnFactory@224] - NIOServerCnxn factory exited run method 2017-01-24 22:13:10,172 [myid:0] - ERROR [localhost/127.0.0.1:11238:QuorumCnxManager$Listener@715] - Exception while listening java.net.SocketException: Socket closed at java.net.PlainSocketImpl.socketAccept(Native Method) at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398) at java.net.ServerSocket.implAccept(ServerSocket.java:530) at java.net.ServerSocket.accept(ServerSocket.java:498) at org.apache.zookeeper.server.quorum.QuorumCnxManager$Listener.run(QuorumCnxManager.java:696) 2017-01-24 22:13:10,173 [myid:0] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@896] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1049) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:73) at 
org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:884) 2017-01-24 22:13:10,173 [myid:1] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@980] - Connection broken for id 0, my id = 1, error = java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:965) 2017-01-24 22:13:10,173 [myid:1] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@983] - Interrupting SendWorker 2017-01-24 22:13:10,173 [myid:0] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@980] - Connection broken for id 1, my id = 0, error = java.net.SocketException: Socket closed at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.read(SocketInputStream.java:152) at java.net.SocketInputStream.read(SocketInputStream.java:122) at java.io.BufferedInputStream.fill(BufferedInputStream.java:235) at java.io.BufferedInputStream.read(BufferedInputStream.java:254) at java.io.DataInputStream.readInt(DataInputStream.java:387) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:965) 2017-01-24 22:13:10,174 [myid:0] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@983] - Interrupting SendWorker 2017-01-24 22:13:10,174 [myid:1] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@896] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1049) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:73) at 
org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:884) 2017-01-24 22:13:10,174 [myid:0] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@906] - Send worker leaving thread 2017-01-24 22:13:10,175 [myid:1] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@906] - Send worker leaving thread 2017-01-24 22:13:10,175 [myid:] - INFO [main:QuorumBase@318] - Shutting down leader election QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11236 2017-01-24 22:13:10,175 [myid:] - INFO [main:QuorumBase@323] - Waiting for QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11236 to exit thread 2017-01-24 22:13:10,230 [myid:] - WARN [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. 
2017-01-24 22:13:10,231 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11230 2017-01-24 22:13:10,231 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:52552 2017-01-24 22:13:10,231 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:52552, server: localhost/127.0.0.1:11230 2017-01-24 22:13:10,231 [myid:1] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxn@354] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running 2017-01-24 22:13:10,231 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:52552 (no session established for client) 2017-01-24 22:13:10,232 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect 2017-01-24 22:13:10,277 [myid:1] - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:QuorumCnxManager@559] - Cannot open channel to 0 at election address localhost/127.0.0.1:11229 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:538) at 
org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:579) at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:769) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:827) 2017-01-24 22:13:10,278 [myid:1] - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:QuorumCnxManager@559] - Cannot open channel to 2 at election address localhost/127.0.0.1:11235 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:538) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:579) at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:769) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:827) 2017-01-24 22:13:10,278 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:FastLeaderElection@778] - Notification time out: 1600 2017-01-24 22:13:10,350 [myid:] - WARN [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. 
2017-01-24 22:13:10,350 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11233 2017-01-24 22:13:10,351 [myid:] - WARN [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@1102] - Session 0x159d440f0ed0000 for server null, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081) 2017-01-24 22:13:10,685 [myid:] - WARN [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. 
2017-01-24 22:13:10,685 [myid:] - INFO [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11227
2017-01-24 22:13:10,685 [myid:] - WARN [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@1102] - Session 0x159d440f0ed0000 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
2017-01-24 22:13:11,173 [myid:0] - INFO [localhost/127.0.0.1:11238:QuorumCnxManager$Listener@728] - Leaving listener
2017-01-24 22:13:11,873 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection$Messenger$WorkerReceiver@340] - WorkerReceiver is down
2017-01-24 22:13:11,875 [myid:] - WARN [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2017-01-24 22:13:11,875 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11230
2017-01-24 22:13:11,875 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:52557
2017-01-24 22:13:11,876 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:52557, server: localhost/127.0.0.1:11230
2017-01-24 22:13:11,876 [myid:1] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxn@354] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2017-01-24 22:13:11,876 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:52557 (no session established for client)
2017-01-24 22:13:11,876 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect
2017-01-24 22:13:11,879 [myid:1] - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:QuorumCnxManager@559] - Cannot open channel to 0 at election address localhost/127.0.0.1:11229
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:538)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:579)
    at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:769)
    at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:827)
2017-01-24 22:13:11,879 [myid:1] - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:QuorumCnxManager@559] - Cannot open channel to 2 at election address localhost/127.0.0.1:11235
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:538)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:579)
    at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:769)
    at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:827)
2017-01-24 22:13:11,879 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:FastLeaderElection@778] - Notification time out: 3200
2017-01-24 22:13:11,918 [myid:0] - INFO [WorkerSender[myid=0]:FastLeaderElection$Messenger$WorkerSender@370] - WorkerSender is down
2017-01-24 22:13:11,919 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection$Messenger$WorkerReceiver@340] - WorkerReceiver is down
2017-01-24 22:13:12,132 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11236:Follower@167] - shutdown called
java.lang.Exception: shutdown Follower
    at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:167)
    at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:855)
2017-01-24 22:13:12,132 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11236:FollowerZooKeeperServer@139] - Shutting down
2017-01-24 22:13:12,132 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11236:ZooKeeperServer@419] - shutting down
2017-01-24 22:13:12,133 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11236:FollowerRequestProcessor@105] - Shutting down
2017-01-24 22:13:12,133 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11236:CommitProcessor@181] - Shutting down
2017-01-24 22:13:12,133 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11236:FinalRequestProcessor@415] - shutdown of request processor complete
2017-01-24 22:13:12,133 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11236:SyncRequestProcessor@175] - Shutting down
2017-01-24 22:13:12,133 [myid:0] - WARN [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11236:QuorumPeer@879] - QuorumPeer main thread exited
2017-01-24 22:13:12,135 [myid:] - INFO [main:ZKTestCase$1@60] - SUCCEEDED testNullAuthLearnerServer
2017-01-24 22:13:12,135 [myid:] - INFO [main:ZKTestCase$1@55] - FINISHED testNullAuthLearnerServer
2017-01-24 22:13:12,136 [myid:] - INFO [main:ZKTestCase$1@50] - STARTING testAuthLearnerAgainstNullAuthServer
2017-01-24 22:13:12,136 [myid:] - INFO [Thread-183:JUnit4ZKTestRunner$LoggedInvokeMethod@50] - RUNNING TEST METHOD testAuthLearnerAgainstNullAuthServer
2017-01-24 22:13:12,136 [myid:] - INFO [Thread-183:PortAssignment@32] - assigning port 11242
2017-01-24 22:13:12,136 [myid:] - INFO [Thread-183:PortAssignment@32] - assigning port 11243
2017-01-24 22:13:12,136 [myid:] - INFO [Thread-183:PortAssignment@32] - assigning port 11244
2017-01-24 22:13:12,137 [myid:] - INFO [Thread-183:PortAssignment@32] - assigning port 11245
2017-01-24 22:13:12,137 [myid:] - INFO [Thread-183:PortAssignment@32] - assigning port 11246
2017-01-24 22:13:12,137 [myid:] - INFO [Thread-183:PortAssignment@32] - assigning port 11247
2017-01-24 22:13:12,137 [myid:] - INFO [Thread-183:QuorumPeerTestBase$MainThread@81] - id = 0 tmpDir = /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test7361117950796815905.junit.dir clientPort = 11242
2017-01-24 22:13:12,138 [myid:] - INFO [Thread-184:QuorumPeerConfig@111] - Reading configuration from: /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test7361117950796815905.junit.dir/zoo.cfg
2017-01-24 22:13:12,138 [myid:] - INFO [Thread-183:QuorumPeerTestBase$MainThread@81] - id = 1 tmpDir = /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test296771033193329447.junit.dir clientPort = 11245
2017-01-24 22:13:12,138 [myid:] - WARN [Thread-184:QuorumPeerConfig@327] - No server failure will be tolerated. You need at least 3 servers.
2017-01-24 22:13:12,138 [myid:] - INFO [Thread-184:QuorumPeerConfig@374] - Defaulting to majority quorums
2017-01-24 22:13:12,138 [myid:] - INFO [Thread-183:FourLetterWordMain@43] - connecting to 127.0.0.1 11242
2017-01-24 22:13:12,138 [myid:0] - INFO [Thread-184:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3
2017-01-24 22:13:12,139 [myid:0] - INFO [Thread-184:DatadirCleanupManager@79] - autopurge.purgeInterval set to 0
2017-01-24 22:13:12,139 [myid:0] - INFO [Thread-184:DatadirCleanupManager@101] - Purge task is not scheduled.
2017-01-24 22:13:12,139 [myid:] - INFO [Thread-183:ClientBase@246] - server 127.0.0.1:11242 not up java.net.ConnectException: Connection refused
2017-01-24 22:13:12,139 [myid:0] - WARN [Thread-184:QuorumPeerMain@129] - Unable to register log4j JMX control
javax.management.InstanceAlreadyExistsException: log4j:hiearchy=default
    at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
    at org.apache.zookeeper.jmx.ManagedUtil.registerLog4jMBeans(ManagedUtil.java:53)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:127)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:116)
    at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.run(QuorumPeerTestBase.java:140)
    at java.lang.Thread.run(Thread.java:745)
2017-01-24 22:13:12,139 [myid:0] - INFO [Thread-184:QuorumPeerMain@132] - Starting quorum peer
2017-01-24 22:13:12,139 [myid:] - INFO [Thread-185:QuorumPeerConfig@111] - Reading configuration from: /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test296771033193329447.junit.dir/zoo.cfg
2017-01-24 22:13:12,140 [myid:0] - INFO [Thread-184:NIOServerCnxnFactory@94] - binding to port 0.0.0.0/0.0.0.0:11242
2017-01-24 22:13:12,140 [myid:] - WARN [Thread-185:QuorumPeerConfig@327] - No server failure will be tolerated. You need at least 3 servers.
2017-01-24 22:13:12,140 [myid:] - INFO [Thread-185:QuorumPeerConfig@374] - Defaulting to majority quorums
2017-01-24 22:13:12,140 [myid:0] - INFO [Thread-184:QuorumPeer@1048] - minSessionTimeout set to -1
2017-01-24 22:13:12,140 [myid:1] - INFO [Thread-185:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3
2017-01-24 22:13:12,140 [myid:1] - INFO [Thread-185:DatadirCleanupManager@79] - autopurge.purgeInterval set to 0
2017-01-24 22:13:12,141 [myid:1] - INFO [Thread-185:DatadirCleanupManager@101] - Purge task is not scheduled.
2017-01-24 22:13:12,140 [myid:0] - INFO [Thread-184:QuorumPeer@1059] - maxSessionTimeout set to -1
2017-01-24 22:13:12,141 [myid:1] - WARN [Thread-185:QuorumPeerMain@129] - Unable to register log4j JMX control
javax.management.InstanceAlreadyExistsException: log4j:hiearchy=default
    at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
    at org.apache.zookeeper.jmx.ManagedUtil.registerLog4jMBeans(ManagedUtil.java:53)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:127)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:116)
    at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.run(QuorumPeerTestBase.java:140)
    at java.lang.Thread.run(Thread.java:745)
2017-01-24 22:13:12,141 [myid:1] - INFO [Thread-185:QuorumPeerMain@132] - Starting quorum peer
2017-01-24 22:13:12,141 [myid:0] - INFO [Thread-184:QuorumPeer@1279] - quorum.auth.enableSasl set to true
2017-01-24 22:13:12,141 [myid:0] - INFO [Thread-184:QuorumPeer@1264] - quorum.auth.serverRequireSasl set to false
2017-01-24 22:13:12,141 [myid:0] - INFO [Thread-184:QuorumPeer@1270] - quorum.auth.learnerRequireSasl set to false
2017-01-24 22:13:12,141 [myid:1] - INFO [Thread-185:NIOServerCnxnFactory@94] - binding to port 0.0.0.0/0.0.0.0:11245
2017-01-24 22:13:12,142 [myid:0] - INFO [Thread-184:QuorumPeer@1286] - quorum.auth.kerberos.servicePrincipal set to zkquorum/localhost
2017-01-24 22:13:12,142 [myid:0] - INFO [Thread-184:QuorumPeer@1298] - quorum.auth.server.saslLoginContext set to QuorumServer
2017-01-24 22:13:12,142 [myid:0] - INFO [Thread-184:QuorumPeer@1292] - quorum.auth.learner.saslLoginContext set to QuorumLearner
2017-01-24 22:13:12,142 [myid:1] - INFO [Thread-185:QuorumPeer@1048] - minSessionTimeout set to -1
2017-01-24 22:13:12,142 [myid:1] - INFO [Thread-185:QuorumPeer@1059] - maxSessionTimeout set to -1
2017-01-24 22:13:12,142 [myid:0] - INFO [Thread-184:QuorumPeer@1306] - quorum.cnxn.threads.size set to 20
2017-01-24 22:13:12,142 [myid:1] - INFO [Thread-185:QuorumPeer@1277] - QuorumPeer communication is not secured!
2017-01-24 22:13:12,142 [myid:1] - INFO [Thread-185:QuorumPeer@1306] - quorum.cnxn.threads.size set to 20
2017-01-24 22:13:12,143 [myid:1] - INFO [Thread-185:QuorumPeer@540] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2017-01-24 22:13:12,144 [myid:1] - INFO [Thread-185:QuorumPeer@555] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2017-01-24 22:13:12,146 [myid:1] - INFO [Thread-186:QuorumCnxManager$Listener@691] - My election bind port: 0.0.0.0/0.0.0.0:11247
2017-01-24 22:13:12,146 [myid:0] - INFO [Thread-184:Login@294] - QuorumServer successfully logged in.
2017-01-24 22:13:12,146 [myid:0] - INFO [Thread-184:Login@294] - QuorumLearner successfully logged in.
2017-01-24 22:13:12,147 [myid:0] - INFO [Thread-184:QuorumPeer@540] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2017-01-24 22:13:12,148 [myid:0] - INFO [Thread-184:QuorumPeer@555] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2017-01-24 22:13:12,151 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11245:QuorumPeer@781] - LOOKING
2017-01-24 22:13:12,152 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11245:FastLeaderElection@744] - New election. My id = 1, proposed zxid=0x0
2017-01-24 22:13:12,152 [myid:1] - WARN [WorkerSender[myid=1]:QuorumCnxManager@559] - Cannot open channel to 0 at election address localhost/127.0.0.1:11244
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:538)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:514)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:393)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:365)
    at java.lang.Thread.run(Thread.java:745)
2017-01-24 22:13:12,152 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:13:12,153 [myid:0] - INFO [Thread-187:QuorumCnxManager$Listener@691] - My election bind port: 0.0.0.0/0.0.0.0:11244
2017-01-24 22:13:12,158 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11242:QuorumPeer@781] - LOOKING
2017-01-24 22:13:12,158 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11242:FastLeaderElection@744] - New election. My id = 0, proposed zxid=0x0
2017-01-24 22:13:12,158 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 0 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:13:12,159 [myid:1] - INFO [localhost/127.0.0.1:11247:QuorumCnxManager$Listener@698] - Received connection request /127.0.0.1:38550
2017-01-24 22:13:12,159 [myid:0] - INFO [QuorumConnectionThread-[myid=0]-1:SaslQuorumAuthLearner@79] - Skipping SASL authentication as quorum.auth.learnerRequireSasl=false
2017-01-24 22:13:12,160 [myid:0] - INFO [localhost/127.0.0.1:11244:QuorumCnxManager$Listener@698] - Received connection request /127.0.0.1:53591
2017-01-24 22:13:12,160 [myid:0] - INFO [QuorumConnectionThread-[myid=0]-1:QuorumCnxManager@331] - Have smaller server identifier, so dropping the connection: (1, 0)
2017-01-24 22:13:12,167 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:13:12,167 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:13:12,167 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 0 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:13:12,168 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)
2017-01-24 22:13:12,368 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11242:QuorumPeer@849] - FOLLOWING
2017-01-24 22:13:12,368 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11245:QuorumPeer@861] - LEADING
2017-01-24 22:13:12,368 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11242:ZooKeeperServer@162] - Created server with tickTime 4000 minSessionTimeout 8000 maxSessionTimeout 80000 datadir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test7361117950796815905.junit.dir/data/version-2 snapdir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test7361117950796815905.junit.dir/data/version-2
2017-01-24 22:13:12,368 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11245:ZooKeeperServer@162] - Created server with tickTime 4000 minSessionTimeout 8000 maxSessionTimeout 80000 datadir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test296771033193329447.junit.dir/data/version-2 snapdir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test296771033193329447.junit.dir/data/version-2
2017-01-24 22:13:12,368 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11242:Follower@64] - FOLLOWING - LEADER ELECTION TOOK - 210
2017-01-24 22:13:12,368 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11245:Leader@356] - LEADING - LEADER ELECTION TOOK - 216
2017-01-24 22:13:12,369 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11242:SaslQuorumAuthLearner@79] - Skipping SASL authentication as quorum.auth.learnerRequireSasl=false
2017-01-24 22:13:12,369 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11245:FileTxnSnapLog@281] - Snapshotting: 0x0 to /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test296771033193329447.junit.dir/data/version-2/snapshot.0
2017-01-24 22:13:12,371 [myid:1] - INFO [LearnerHandler-/127.0.0.1:56735:LearnerHandler@287] - Follower sid: 0 : info : org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@68eefca4
2017-01-24 22:13:12,373 [myid:1] - INFO [LearnerHandler-/127.0.0.1:56735:LearnerHandler@342] - Synchronizing with Follower sid: 0 maxCommittedLog=0x0 minCommittedLog=0x0 peerLastZxid=0x0
2017-01-24 22:13:12,373 [myid:1] - INFO [LearnerHandler-/127.0.0.1:56735:LearnerHandler@441] - Sending snapshot last zxid of peer is 0x0 zxid of leader is 0x100000000sent zxid of db as 0x0
2017-01-24 22:13:12,373 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11242:Learner@329] - Getting a snapshot from leader
2017-01-24 22:13:12,374 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11242:FileTxnSnapLog@281] - Snapshotting: 0x0 to /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test7361117950796815905.junit.dir/data/version-2/snapshot.0
2017-01-24 22:13:12,375 [myid:1] - INFO [LearnerHandler-/127.0.0.1:56735:LearnerHandler@477] - Received NEWLEADER-ACK message from 0
2017-01-24 22:13:12,375 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11245:Leader@934] - Have quorum of supporters, sids: [ 0,1 ]; starting up and setting last processed zxid: 0x100000000
2017-01-24 22:13:12,389 [myid:] - INFO [Thread-183:FourLetterWordMain@43] - connecting to 127.0.0.1 11242
2017-01-24 22:13:12,389 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11242:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:43163
2017-01-24 22:13:12,390 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11242:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:43163
2017-01-24 22:13:12,390 [myid:0] - INFO [Thread-190:NIOServerCnxn$StatCommand@655] - Stat command output
2017-01-24 22:13:12,391 [myid:0] - INFO [Thread-190:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:43163 (no session established for client)
2017-01-24 22:13:12,391 [myid:] - INFO [Thread-183:FourLetterWordMain@43] - connecting to 127.0.0.1 11245
2017-01-24 22:13:12,391 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11245:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:34938
2017-01-24 22:13:12,392 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11245:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:34938
2017-01-24 22:13:12,392 [myid:1] - INFO [Thread-191:NIOServerCnxn$StatCommand@655] - Stat command output
2017-01-24 22:13:12,393 [myid:1] - INFO [Thread-191:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:34938 (no session established for client)
2017-01-24 22:13:12,393 [myid:] - INFO [Thread-183:ZooKeeper@438] - Initiating client connection, connectString=127.0.0.1:11242,127.0.0.1:11245 sessionTimeout=30000 watcher=org.apache.zookeeper.test.ClientBase$CountdownWatcher@494b584c
2017-01-24 22:13:12,394 [myid:] - WARN [Thread-183-SendThread(localhost:11245):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2017-01-24 22:13:12,394 [myid:] - INFO [Thread-183-SendThread(localhost:11245):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11245
2017-01-24 22:13:12,394 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11245:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:34939
2017-01-24 22:13:12,394 [myid:] - INFO [Thread-183-SendThread(localhost:11245):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:34939, server: localhost/127.0.0.1:11245
2017-01-24 22:13:12,395 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11245:ZooKeeperServer@839] - Client attempting to establish new session at /127.0.0.1:34939
2017-01-24 22:13:12,395 [myid:1] - INFO [SyncThread:1:FileTxnLog@199] - Creating new log file: log.100000001
2017-01-24 22:13:12,396 [myid:0] - WARN [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11242:Follower@119] - Got zxid 0x100000001 expected 0x1
2017-01-24 22:13:12,396 [myid:0] - INFO [SyncThread:0:FileTxnLog@199] - Creating new log file: log.100000001
2017-01-24 22:13:12,398 [myid:1] - INFO [CommitProcessor:1:ZooKeeperServer@595] - Established session 0x159d441b2380000 with negotiated timeout 30000 for client /127.0.0.1:34939
2017-01-24 22:13:12,398 [myid:] - INFO [Thread-183-SendThread(localhost:11245):ClientCnxn$SendThread@1235] - Session establishment complete on server localhost/127.0.0.1:11245, sessionid = 0x159d441b2380000, negotiated timeout = 30000
2017-01-24 22:13:12,405 [myid:1] - INFO [ProcessThread(sid:1 cport:-1)::PrepRequestProcessor@494] - Processed session termination for sessionid: 0x159d441b2380000
2017-01-24 22:13:12,408 [myid:] - INFO [Thread-183:ZooKeeper@684] - Session: 0x159d441b2380000 closed
2017-01-24 22:13:12,408 [myid:] - INFO [Thread-183-EventThread:ClientCnxn$EventThread@512] - EventThread shut down
2017-01-24 22:13:12,408 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11245:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:34939 which had sessionid 0x159d441b2380000
2017-01-24 22:13:12,408 [myid:] - INFO [Thread-183:JUnit4ZKTestRunner$LoggedInvokeMethod@55] - Memory used 30080
2017-01-24 22:13:12,409 [myid:] - INFO [Thread-183:JUnit4ZKTestRunner$LoggedInvokeMethod@60] - Number of threads 67
2017-01-24 22:13:12,409 [myid:] - INFO [Thread-183:JUnit4ZKTestRunner$LoggedInvokeMethod@65] - FINISHED TEST METHOD testAuthLearnerAgainstNullAuthServer
2017-01-24 22:13:12,409 [myid:] - INFO [main:QuorumBase@314] - Shutting down quorum peer QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11242
2017-01-24 22:13:12,409 [myid:] - INFO [main:Follower@167] - shutdown called
java.lang.Exception: shutdown Follower
    at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:167)
    at org.apache.zookeeper.server.quorum.QuorumPeer.shutdown(QuorumPeer.java:896)
    at org.apache.zookeeper.test.QuorumBase.shutdown(QuorumBase.java:315)
    at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$TestQPMain.shutdown(QuorumPeerTestBase.java:59)
    at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.shutdown(QuorumPeerTestBase.java:152)
    at org.apache.zookeeper.server.quorum.auth.QuorumAuthTestBase.shutdown(QuorumAuthTestBase.java:138)
    at org.apache.zookeeper.server.quorum.auth.QuorumAuthTestBase.shutdownAll(QuorumAuthTestBase.java:131)
    at org.apache.zookeeper.server.quorum.auth.QuorumAuthUpgradeTest.tearDown(QuorumAuthUpgradeTest.java:68)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:37)
    at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:48)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
    at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:535)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1182)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:1033)
2017-01-24 22:13:12,410 [myid:] - INFO [main:FollowerZooKeeperServer@139] - Shutting down
2017-01-24 22:13:12,410 [myid:] - INFO [main:ZooKeeperServer@419] - shutting down
2017-01-24 22:13:12,410 [myid:] - INFO [main:FollowerRequestProcessor@105] - Shutting down
2017-01-24 22:13:12,410 [myid:] - INFO [main:CommitProcessor@181] - Shutting down
2017-01-24 22:13:12,410 [myid:] - INFO [main:FinalRequestProcessor@415] - shutdown of request processor complete
2017-01-24 22:13:12,410 [myid:0] - INFO [FollowerRequestProcessor:0:FollowerRequestProcessor@95] - FollowerRequestProcessor exited loop!
2017-01-24 22:13:12,410 [myid:0] - INFO [CommitProcessor:0:CommitProcessor@150] - CommitProcessor exited loop!
2017-01-24 22:13:12,411 [myid:] - INFO [main:SyncRequestProcessor@175] - Shutting down
2017-01-24 22:13:12,411 [myid:0] - INFO [SyncThread:0:SyncRequestProcessor@155] - SyncRequestProcessor exited!
2017-01-24 22:13:12,411 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11242:NIOServerCnxnFactory@224] - NIOServerCnxn factory exited run method
2017-01-24 22:13:12,412 [myid:0] - ERROR [localhost/127.0.0.1:11244:QuorumCnxManager$Listener@715] - Exception while listening
java.net.SocketException: Socket closed
    at java.net.PlainSocketImpl.socketAccept(Native Method)
    at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398)
    at java.net.ServerSocket.implAccept(ServerSocket.java:530)
    at java.net.ServerSocket.accept(ServerSocket.java:498)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$Listener.run(QuorumCnxManager.java:696)
2017-01-24 22:13:12,412 [myid:1] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@980] - Connection broken for id 0, my id = 1, error =
java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:392)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:965)
2017-01-24 22:13:12,412 [myid:0] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@980] - Connection broken for id 1, my id = 0, error =
java.net.SocketException: Socket closed
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.read(SocketInputStream.java:152)
    at java.net.SocketInputStream.read(SocketInputStream.java:122)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
    at java.io.DataInputStream.readInt(DataInputStream.java:387)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:965)
2017-01-24 22:13:12,413 [myid:0] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@896] - Interrupted while waiting for message on queue
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1049)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:73)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:884)
2017-01-24 22:13:12,414 [myid:0] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@906] - Send worker leaving thread
2017-01-24 22:13:12,412 [myid:1] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@983] - Interrupting SendWorker
2017-01-24 22:13:12,414 [myid:] - INFO [main:QuorumBase@318] - Shutting down leader election QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11242
2017-01-24 22:13:12,413 [myid:0] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@983] - Interrupting SendWorker
2017-01-24 22:13:12,415 [myid:] - INFO [main:QuorumBase@323] - Waiting for QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11242 to exit thread
2017-01-24 22:13:12,414 [myid:1] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@896] - Interrupted while waiting for message on queue
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1049)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:73)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:884)
2017-01-24 22:13:12,415 [myid:1] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@906] - Send worker leaving thread
2017-01-24 22:13:12,656 [myid:] - WARN [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2017-01-24 22:13:12,656 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11233
2017-01-24 22:13:12,656 [myid:] - WARN [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@1102] - Session 0x159d440f0ed0000 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
2017-01-24 22:13:13,412 [myid:0] - INFO [localhost/127.0.0.1:11244:QuorumCnxManager$Listener@728] - Leaving listener
2017-01-24 22:13:13,603 [myid:] - WARN [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2017-01-24 22:13:13,603 [myid:] - INFO [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11227
2017-01-24 22:13:13,604 [myid:] - WARN [Thread-12-SendThread(localhost:11227):ClientCnxn$SendThread@1102] - Session 0x159d440f0ed0000 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
2017-01-24 22:13:14,382 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11242:Follower@167] - shutdown called
java.lang.Exception: shutdown Follower
    at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:167)
    at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:855)
2017-01-24 22:13:14,382 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11242:FollowerZooKeeperServer@139] - Shutting down
2017-01-24 22:13:14,382 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11242:ZooKeeperServer@419] - shutting down
2017-01-24 22:13:14,382 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11242:FollowerRequestProcessor@105] - Shutting down
2017-01-24 22:13:14,382 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11242:CommitProcessor@181] - Shutting down
2017-01-24 22:13:14,382 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11242:FinalRequestProcessor@415] - shutdown of request processor complete
2017-01-24 22:13:14,383 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11242:SyncRequestProcessor@175] - Shutting down
2017-01-24 22:13:14,383 [myid:0] - WARN [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11242:QuorumPeer@879]
- QuorumPeer main thread exited 2017-01-24 22:13:14,384 [myid:] - INFO [main:ZKTestCase$1@60] - SUCCEEDED testAuthLearnerAgainstNullAuthServer 2017-01-24 22:13:14,384 [myid:] - INFO [main:ZKTestCase$1@55] - FINISHED testAuthLearnerAgainstNullAuthServer 2017-01-24 22:13:14,385 [myid:] - INFO [main:ZKTestCase$1@50] - STARTING testAuthLearnerServer 2017-01-24 22:13:14,385 [myid:] - INFO [Thread-192:JUnit4ZKTestRunner$LoggedInvokeMethod@50] - RUNNING TEST METHOD testAuthLearnerServer 2017-01-24 22:13:14,385 [myid:] - INFO [Thread-192:PortAssignment@32] - assigning port 11248 2017-01-24 22:13:14,386 [myid:] - INFO [Thread-192:PortAssignment@32] - assigning port 11249 2017-01-24 22:13:14,386 [myid:] - INFO [Thread-192:PortAssignment@32] - assigning port 11250 2017-01-24 22:13:14,386 [myid:] - INFO [Thread-192:PortAssignment@32] - assigning port 11251 2017-01-24 22:13:14,386 [myid:] - INFO [Thread-192:PortAssignment@32] - assigning port 11252 2017-01-24 22:13:14,386 [myid:] - INFO [Thread-192:PortAssignment@32] - assigning port 11253 2017-01-24 22:13:14,386 [myid:] - INFO [Thread-192:QuorumPeerTestBase$MainThread@81] - id = 0 tmpDir = /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test4413362313903870178.junit.dir clientPort = 11248 2017-01-24 22:13:14,387 [myid:] - INFO [Thread-193:QuorumPeerConfig@111] - Reading configuration from: /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test4413362313903870178.junit.dir/zoo.cfg 2017-01-24 22:13:14,387 [myid:] - INFO [Thread-192:QuorumPeerTestBase$MainThread@81] - id = 1 tmpDir = /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test8693106920335238483.junit.dir clientPort = 11251 2017-01-24 22:13:14,387 [myid:] - WARN [Thread-193:QuorumPeerConfig@327] - No server failure will be tolerated. You need at least 3 servers. 
2017-01-24 22:13:14,388 [myid:] - INFO [Thread-193:QuorumPeerConfig@374] - Defaulting to majority quorums 2017-01-24 22:13:14,388 [myid:] - INFO [Thread-192:FourLetterWordMain@43] - connecting to 127.0.0.1 11248 2017-01-24 22:13:14,388 [myid:0] - INFO [Thread-193:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3 2017-01-24 22:13:14,388 [myid:0] - INFO [Thread-193:DatadirCleanupManager@79] - autopurge.purgeInterval set to 0 2017-01-24 22:13:14,388 [myid:0] - INFO [Thread-193:DatadirCleanupManager@101] - Purge task is not scheduled. 2017-01-24 22:13:14,388 [myid:] - INFO [Thread-192:ClientBase@246] - server 127.0.0.1:11248 not up java.net.ConnectException: Connection refused 2017-01-24 22:13:14,388 [myid:0] - WARN [Thread-193:QuorumPeerMain@129] - Unable to register log4j JMX control javax.management.InstanceAlreadyExistsException: log4j:hiearchy=default at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324) at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) at org.apache.zookeeper.jmx.ManagedUtil.registerLog4jMBeans(ManagedUtil.java:53) at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:127) at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:116) at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.run(QuorumPeerTestBase.java:140) at java.lang.Thread.run(Thread.java:745) 2017-01-24 22:13:14,389 [myid:0] - INFO [Thread-193:QuorumPeerMain@132] - Starting quorum peer 
2017-01-24 22:13:14,388 [myid:] - INFO [Thread-194:QuorumPeerConfig@111] - Reading configuration from: /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test8693106920335238483.junit.dir/zoo.cfg 2017-01-24 22:13:14,389 [myid:0] - INFO [Thread-193:NIOServerCnxnFactory@94] - binding to port 0.0.0.0/0.0.0.0:11248 2017-01-24 22:13:14,389 [myid:] - WARN [Thread-194:QuorumPeerConfig@327] - No server failure will be tolerated. You need at least 3 servers. 2017-01-24 22:13:14,389 [myid:] - INFO [Thread-194:QuorumPeerConfig@374] - Defaulting to majority quorums 2017-01-24 22:13:14,390 [myid:1] - INFO [Thread-194:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3 2017-01-24 22:13:14,390 [myid:1] - INFO [Thread-194:DatadirCleanupManager@79] - autopurge.purgeInterval set to 0 2017-01-24 22:13:14,390 [myid:1] - INFO [Thread-194:DatadirCleanupManager@101] - Purge task is not scheduled. 2017-01-24 22:13:14,390 [myid:0] - INFO [Thread-193:QuorumPeer@1048] - minSessionTimeout set to -1 2017-01-24 22:13:14,390 [myid:0] - INFO [Thread-193:QuorumPeer@1059] - maxSessionTimeout set to -1 2017-01-24 22:13:14,390 [myid:1] - WARN [Thread-194:QuorumPeerMain@129] - Unable to register log4j JMX control javax.management.InstanceAlreadyExistsException: log4j:hiearchy=default at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324) at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) at org.apache.zookeeper.jmx.ManagedUtil.registerLog4jMBeans(ManagedUtil.java:53) at 
org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:127) at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:116) at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.run(QuorumPeerTestBase.java:140) at java.lang.Thread.run(Thread.java:745) 2017-01-24 22:13:14,391 [myid:1] - INFO [Thread-194:QuorumPeerMain@132] - Starting quorum peer 2017-01-24 22:13:14,391 [myid:0] - INFO [Thread-193:QuorumPeer@1279] - quorum.auth.enableSasl set to true 2017-01-24 22:13:14,391 [myid:0] - INFO [Thread-193:QuorumPeer@1264] - quorum.auth.serverRequireSasl set to true 2017-01-24 22:13:14,391 [myid:0] - INFO [Thread-193:QuorumPeer@1270] - quorum.auth.learnerRequireSasl set to true 2017-01-24 22:13:14,391 [myid:1] - INFO [Thread-194:NIOServerCnxnFactory@94] - binding to port 0.0.0.0/0.0.0.0:11251 2017-01-24 22:13:14,391 [myid:0] - INFO [Thread-193:QuorumPeer@1286] - quorum.auth.kerberos.servicePrincipal set to zkquorum/localhost 2017-01-24 22:13:14,391 [myid:0] - INFO [Thread-193:QuorumPeer@1298] - quorum.auth.server.saslLoginContext set to QuorumServer 2017-01-24 22:13:14,392 [myid:0] - INFO [Thread-193:QuorumPeer@1292] - quorum.auth.learner.saslLoginContext set to QuorumLearner 2017-01-24 22:13:14,392 [myid:1] - INFO [Thread-194:QuorumPeer@1048] - minSessionTimeout set to -1 2017-01-24 22:13:14,392 [myid:1] - INFO [Thread-194:QuorumPeer@1059] - maxSessionTimeout set to -1 2017-01-24 22:13:14,392 [myid:0] - INFO [Thread-193:QuorumPeer@1306] - quorum.cnxn.threads.size set to 20 2017-01-24 22:13:14,392 [myid:1] - INFO [Thread-194:QuorumPeer@1279] - quorum.auth.enableSasl set to true 2017-01-24 22:13:14,392 [myid:1] - INFO [Thread-194:QuorumPeer@1264] - quorum.auth.serverRequireSasl set to true 2017-01-24 22:13:14,392 [myid:1] - INFO [Thread-194:QuorumPeer@1270] - quorum.auth.learnerRequireSasl set to true 2017-01-24 22:13:14,392 [myid:1] - INFO [Thread-194:QuorumPeer@1286] - 
quorum.auth.kerberos.servicePrincipal set to zkquorum/localhost 2017-01-24 22:13:14,392 [myid:1] - INFO [Thread-194:QuorumPeer@1298] - quorum.auth.server.saslLoginContext set to QuorumServer 2017-01-24 22:13:14,393 [myid:1] - INFO [Thread-194:QuorumPeer@1292] - quorum.auth.learner.saslLoginContext set to QuorumLearner 2017-01-24 22:13:14,393 [myid:1] - INFO [Thread-194:QuorumPeer@1306] - quorum.cnxn.threads.size set to 20 2017-01-24 22:13:14,393 [myid:0] - INFO [Thread-193:Login@294] - QuorumServer successfully logged in. 2017-01-24 22:13:14,397 [myid:1] - INFO [Thread-194:Login@294] - QuorumServer successfully logged in. 2017-01-24 22:13:14,397 [myid:0] - INFO [Thread-193:Login@294] - QuorumLearner successfully logged in. 2017-01-24 22:13:14,397 [myid:1] - INFO [Thread-194:Login@294] - QuorumLearner successfully logged in. 2017-01-24 22:13:14,397 [myid:0] - INFO [Thread-193:QuorumPeer@540] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2017-01-24 22:13:14,398 [myid:1] - INFO [Thread-194:QuorumPeer@540] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2017-01-24 22:13:14,398 [myid:0] - INFO [Thread-193:QuorumPeer@555] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2017-01-24 22:13:14,399 [myid:1] - INFO [Thread-194:QuorumPeer@555] - acceptedEpoch not found! Creating with a reasonable default of 0. 
This should only happen when you are upgrading your installation 2017-01-24 22:13:14,405 [myid:0] - INFO [Thread-195:QuorumCnxManager$Listener@691] - My election bind port: 0.0.0.0/0.0.0.0:11250 2017-01-24 22:13:14,405 [myid:1] - INFO [Thread-196:QuorumCnxManager$Listener@691] - My election bind port: 0.0.0.0/0.0.0.0:11253 2017-01-24 22:13:14,406 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11251:QuorumPeer@781] - LOOKING 2017-01-24 22:13:14,406 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11248:QuorumPeer@781] - LOOKING 2017-01-24 22:13:14,407 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11251:FastLeaderElection@744] - New election. My id = 1, proposed zxid=0x0 2017-01-24 22:13:14,407 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11248:FastLeaderElection@744] - New election. My id = 0, proposed zxid=0x0 2017-01-24 22:13:14,407 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 0 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state) 2017-01-24 22:13:14,407 [myid:0] - INFO [localhost/127.0.0.1:11250:QuorumCnxManager$Listener@698] - Received connection request /127.0.0.1:60553 2017-01-24 22:13:14,408 [myid:1] - INFO [localhost/127.0.0.1:11253:QuorumCnxManager$Listener@698] - Received connection request /127.0.0.1:59854 2017-01-24 22:13:14,408 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state) 2017-01-24 22:13:14,411 [myid:1] - INFO [QuorumConnectionThread-[myid=1]-1:SecurityUtils@68] - QuorumLearner will use DIGEST-MD5 as SASL mechanism. 2017-01-24 22:13:14,412 [myid:0] - INFO [QuorumConnectionThread-[myid=0]-1:SecurityUtils@68] - QuorumLearner will use DIGEST-MD5 as SASL mechanism. 
2017-01-24 22:13:14,415 [myid:0] - INFO [QuorumConnectionThread-[myid=0]-2:SaslQuorumServerCallbackHandler@143] - Successfully authenticated learner: authenticationID=test; authorizationID=test. 2017-01-24 22:13:14,415 [myid:1] - INFO [QuorumConnectionThread-[myid=1]-2:SaslQuorumServerCallbackHandler@143] - Successfully authenticated learner: authenticationID=test; authorizationID=test. 2017-01-24 22:13:14,416 [myid:1] - INFO [QuorumConnectionThread-[myid=1]-2:SaslQuorumAuthServer@114] - Successfully completed the authentication using SASL. learner addr: /127.0.0.1:59854 2017-01-24 22:13:14,416 [myid:1] - INFO [QuorumConnectionThread-[myid=1]-1:SaslQuorumAuthLearner@151] - Successfully completed the authentication using SASL. server addr: localhost/127.0.0.1:11250, status: SUCCESS 2017-01-24 22:13:14,416 [myid:0] - INFO [QuorumConnectionThread-[myid=0]-2:SaslQuorumAuthServer@114] - Successfully completed the authentication using SASL. learner addr: /127.0.0.1:60553 2017-01-24 22:13:14,416 [myid:0] - INFO [QuorumConnectionThread-[myid=0]-1:SaslQuorumAuthLearner@151] - Successfully completed the authentication using SASL. 
server addr: localhost/127.0.0.1:11253, status: SUCCESS 2017-01-24 22:13:14,417 [myid:0] - INFO [QuorumConnectionThread-[myid=0]-1:QuorumCnxManager@331] - Have smaller server identifier, so dropping the connection: (1, 0) 2017-01-24 22:13:14,427 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 0 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state) 2017-01-24 22:13:14,428 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state) 2017-01-24 22:13:14,428 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state) 2017-01-24 22:13:14,428 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@542] - Notification: 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state) 2017-01-24 22:13:14,628 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11248:QuorumPeer@849] - FOLLOWING 2017-01-24 22:13:14,629 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11248:ZooKeeperServer@162] - Created server with tickTime 4000 minSessionTimeout 8000 maxSessionTimeout 80000 datadir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test4413362313903870178.junit.dir/data/version-2 snapdir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test4413362313903870178.junit.dir/data/version-2 2017-01-24 22:13:14,629 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11251:QuorumPeer@861] - LEADING 2017-01-24 22:13:14,629 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11248:Follower@64] - FOLLOWING - LEADER ELECTION TOOK - 222 2017-01-24 22:13:14,629 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11251:ZooKeeperServer@162] - Created server with tickTime 
4000 minSessionTimeout 8000 maxSessionTimeout 80000 datadir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test8693106920335238483.junit.dir/data/version-2 snapdir /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test8693106920335238483.junit.dir/data/version-2 2017-01-24 22:13:14,629 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11251:Leader@356] - LEADING - LEADER ELECTION TOOK - 222 2017-01-24 22:13:14,630 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11248:SecurityUtils@68] - QuorumLearner will use DIGEST-MD5 as SASL mechanism. 2017-01-24 22:13:14,630 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11251:FileTxnSnapLog@281] - Snapshotting: 0x0 to /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test8693106920335238483.junit.dir/data/version-2/snapshot.0 2017-01-24 22:13:14,632 [myid:1] - INFO [Thread-197:SaslQuorumServerCallbackHandler@143] - Successfully authenticated learner: authenticationID=test; authorizationID=test. 2017-01-24 22:13:14,633 [myid:1] - INFO [Thread-197:SaslQuorumAuthServer@114] - Successfully completed the authentication using SASL. learner addr: /127.0.0.1:36656 2017-01-24 22:13:14,634 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11248:SaslQuorumAuthLearner@151] - Successfully completed the authentication using SASL. 
server addr: localhost/127.0.0.1:11252, status: SUCCESS 2017-01-24 22:13:14,634 [myid:1] - INFO [LearnerHandler-/127.0.0.1:36656:LearnerHandler@287] - Follower sid: 0 : info : org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@5b8ec669 2017-01-24 22:13:14,636 [myid:1] - INFO [LearnerHandler-/127.0.0.1:36656:LearnerHandler@342] - Synchronizing with Follower sid: 0 maxCommittedLog=0x0 minCommittedLog=0x0 peerLastZxid=0x0 2017-01-24 22:13:14,636 [myid:1] - INFO [LearnerHandler-/127.0.0.1:36656:LearnerHandler@441] - Sending snapshot last zxid of peer is 0x0 zxid of leader is 0x100000000sent zxid of db as 0x0 2017-01-24 22:13:14,636 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11248:Learner@329] - Getting a snapshot from leader 2017-01-24 22:13:14,638 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11248:FileTxnSnapLog@281] - Snapshotting: 0x0 to /data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test4413362313903870178.junit.dir/data/version-2/snapshot.0 2017-01-24 22:13:14,638 [myid:] - INFO [Thread-192:FourLetterWordMain@43] - connecting to 127.0.0.1 11248 2017-01-24 22:13:14,639 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11248:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:52527 2017-01-24 22:13:14,639 [myid:1] - INFO [LearnerHandler-/127.0.0.1:36656:LearnerHandler@477] - Received NEWLEADER-ACK message from 0 2017-01-24 22:13:14,639 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11248:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:52527 2017-01-24 22:13:14,643 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11251:Leader@934] - Have quorum of supporters, sids: [ 0,1 ]; starting up and setting last processed zxid: 0x100000000 2017-01-24 22:13:14,647 [myid:0] - INFO [Thread-199:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:52527 (no session established for client) 2017-01-24 22:13:14,898 [myid:] - INFO [Thread-192:FourLetterWordMain@43] - connecting to 
127.0.0.1 11248 2017-01-24 22:13:14,898 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11248:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:52528 2017-01-24 22:13:14,899 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11248:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:52528 2017-01-24 22:13:14,899 [myid:0] - INFO [Thread-200:NIOServerCnxn$StatCommand@655] - Stat command output 2017-01-24 22:13:14,900 [myid:0] - INFO [Thread-200:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:52528 (no session established for client) 2017-01-24 22:13:14,900 [myid:] - INFO [Thread-192:FourLetterWordMain@43] - connecting to 127.0.0.1 11251 2017-01-24 22:13:14,900 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11251:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:59828 2017-01-24 22:13:14,901 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11251:NIOServerCnxn@821] - Processing stat command from /127.0.0.1:59828 2017-01-24 22:13:14,901 [myid:1] - INFO [Thread-201:NIOServerCnxn$StatCommand@655] - Stat command output 2017-01-24 22:13:14,902 [myid:1] - INFO [Thread-201:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:59828 (no session established for client) 2017-01-24 22:13:14,902 [myid:] - INFO [Thread-192:ZooKeeper@438] - Initiating client connection, connectString=127.0.0.1:11248,127.0.0.1:11251 sessionTimeout=30000 watcher=org.apache.zookeeper.test.ClientBase$CountdownWatcher@76e543fa 2017-01-24 22:13:14,903 [myid:] - WARN [Thread-192-SendThread(localhost:11251):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. 
Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. 2017-01-24 22:13:14,903 [myid:] - INFO [Thread-192-SendThread(localhost:11251):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11251 2017-01-24 22:13:14,903 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11251:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:59829 2017-01-24 22:13:14,903 [myid:] - INFO [Thread-192-SendThread(localhost:11251):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:59829, server: localhost/127.0.0.1:11251 2017-01-24 22:13:14,904 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11251:ZooKeeperServer@839] - Client attempting to establish new session at /127.0.0.1:59829 2017-01-24 22:13:14,904 [myid:1] - INFO [SyncThread:1:FileTxnLog@199] - Creating new log file: log.100000001 2017-01-24 22:13:14,904 [myid:0] - WARN [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11248:Follower@119] - Got zxid 0x100000001 expected 0x1 2017-01-24 22:13:14,905 [myid:0] - INFO [SyncThread:0:FileTxnLog@199] - Creating new log file: log.100000001 2017-01-24 22:13:14,907 [myid:1] - INFO [CommitProcessor:1:ZooKeeperServer@595] - Established session 0x159d441bb130000 with negotiated timeout 30000 for client /127.0.0.1:59829 2017-01-24 22:13:14,907 [myid:] - INFO [Thread-192-SendThread(localhost:11251):ClientCnxn$SendThread@1235] - Session establishment complete on server localhost/127.0.0.1:11251, sessionid = 0x159d441bb130000, negotiated timeout = 30000 2017-01-24 22:13:14,910 [myid:1] - INFO [ProcessThread(sid:1 cport:-1)::PrepRequestProcessor@494] - Processed session termination for sessionid: 0x159d441bb130000 2017-01-24 22:13:14,912 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11251:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:59829 which had sessionid 0x159d441bb130000 2017-01-24 22:13:14,912 [myid:] - INFO 
[Thread-192:ZooKeeper@684] - Session: 0x159d441bb130000 closed 2017-01-24 22:13:14,912 [myid:] - INFO [Thread-192:JUnit4ZKTestRunner$LoggedInvokeMethod@55] - Memory used 33263 2017-01-24 22:13:14,912 [myid:] - INFO [Thread-192:JUnit4ZKTestRunner$LoggedInvokeMethod@60] - Number of threads 85 2017-01-24 22:13:14,912 [myid:] - INFO [Thread-192:JUnit4ZKTestRunner$LoggedInvokeMethod@65] - FINISHED TEST METHOD testAuthLearnerServer 2017-01-24 22:13:14,913 [myid:] - INFO [main:QuorumBase@314] - Shutting down quorum peer QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11248 2017-01-24 22:13:14,913 [myid:] - INFO [Thread-192-EventThread:ClientCnxn$EventThread@512] - EventThread shut down 2017-01-24 22:13:14,913 [myid:] - INFO [main:Follower@167] - shutdown called java.lang.Exception: shutdown Follower at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:167) at org.apache.zookeeper.server.quorum.QuorumPeer.shutdown(QuorumPeer.java:896) at org.apache.zookeeper.test.QuorumBase.shutdown(QuorumBase.java:315) at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$TestQPMain.shutdown(QuorumPeerTestBase.java:59) at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.shutdown(QuorumPeerTestBase.java:152) at org.apache.zookeeper.server.quorum.auth.QuorumAuthTestBase.shutdown(QuorumAuthTestBase.java:138) at org.apache.zookeeper.server.quorum.auth.QuorumAuthTestBase.shutdownAll(QuorumAuthTestBase.java:131) at org.apache.zookeeper.server.quorum.auth.QuorumAuthUpgradeTest.tearDown(QuorumAuthUpgradeTest.java:68) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:37) at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:48) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31) at org.junit.runners.ParentRunner.run(ParentRunner.java:236) at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:535) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1182) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:1033) 2017-01-24 22:13:14,913 [myid:] - INFO [main:FollowerZooKeeperServer@139] - Shutting down 2017-01-24 22:13:14,913 [myid:] - INFO [main:ZooKeeperServer@419] - shutting down 2017-01-24 22:13:14,914 [myid:] - INFO [main:FollowerRequestProcessor@105] - Shutting down 2017-01-24 22:13:14,914 [myid:] - INFO [main:CommitProcessor@181] - Shutting down 2017-01-24 22:13:14,914 [myid:] - INFO [main:FinalRequestProcessor@415] - shutdown of request processor complete 2017-01-24 22:13:14,914 [myid:0] - INFO [FollowerRequestProcessor:0:FollowerRequestProcessor@95] - FollowerRequestProcessor exited loop! 2017-01-24 22:13:14,914 [myid:0] - INFO [CommitProcessor:0:CommitProcessor@150] - CommitProcessor exited loop! 
2017-01-24 22:13:14,914 [myid:] - INFO [main:SyncRequestProcessor@175] - Shutting down 2017-01-24 22:13:14,915 [myid:0] - INFO [SyncThread:0:SyncRequestProcessor@155] - SyncRequestProcessor exited! 2017-01-24 22:13:14,915 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11248:NIOServerCnxnFactory@224] - NIOServerCnxn factory exited run method 2017-01-24 22:13:14,916 [myid:0] - ERROR [localhost/127.0.0.1:11250:QuorumCnxManager$Listener@715] - Exception while listening java.net.SocketException: Socket closed at java.net.PlainSocketImpl.socketAccept(Native Method) at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398) at java.net.ServerSocket.implAccept(ServerSocket.java:530) at java.net.ServerSocket.accept(ServerSocket.java:498) at org.apache.zookeeper.server.quorum.QuorumCnxManager$Listener.run(QuorumCnxManager.java:696) 2017-01-24 22:13:14,917 [myid:1] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@980] - Connection broken for id 0, my id = 1, error = java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:965) 2017-01-24 22:13:14,917 [myid:0] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@980] - Connection broken for id 1, my id = 0, error = java.net.SocketException: Socket closed at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.read(SocketInputStream.java:152) at java.net.SocketInputStream.read(SocketInputStream.java:122) at java.io.BufferedInputStream.fill(BufferedInputStream.java:235) at java.io.BufferedInputStream.read(BufferedInputStream.java:254) at java.io.DataInputStream.readInt(DataInputStream.java:387) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:965) 2017-01-24 22:13:14,917 [myid:0] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@983] - Interrupting SendWorker 2017-01-24 22:13:14,917 [myid:0] - WARN 
[SendWorker:1:QuorumCnxManager$SendWorker@896] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1049) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:73) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:884) 2017-01-24 22:13:14,917 [myid:1] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@983] - Interrupting SendWorker 2017-01-24 22:13:14,918 [myid:1] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@896] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1049) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:73) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:884) 2017-01-24 22:13:14,918 [myid:0] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@906] - Send worker leaving thread 2017-01-24 22:13:14,919 [myid:1] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@906] - Send worker leaving thread 2017-01-24 22:13:14,920 [myid:] - INFO [main:QuorumBase@318] - Shutting down leader election 
QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11248 2017-01-24 22:13:14,920 [myid:] - INFO [main:QuorumBase@323] - Waiting for QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11248 to exit thread 2017-01-24 22:13:15,080 [myid:1] - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:QuorumCnxManager@559] - Cannot open channel to 0 at election address localhost/127.0.0.1:11229 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:538) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:579) at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:769) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:827) 2017-01-24 22:13:15,080 [myid:1] - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:QuorumCnxManager@559] - Cannot open channel to 2 at election address localhost/127.0.0.1:11235 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:538) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:579) at 
org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:769) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:827) 2017-01-24 22:13:15,081 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11230:FastLeaderElection@778] - Notification time out: 6400 2017-01-24 22:13:15,167 [myid:0] - INFO [WorkerSender[myid=0]:FastLeaderElection$Messenger$WorkerSender@370] - WorkerSender is down 2017-01-24 22:13:15,167 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection$Messenger$WorkerReceiver@340] - WorkerReceiver is down 2017-01-24 22:13:15,664 [myid:] - WARN [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. 
2017-01-24 22:13:15,664 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11230 2017-01-24 22:13:15,664 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:52580 2017-01-24 22:13:15,664 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@852] - Socket connection established, initiating session, client: /127.0.0.1:52580, server: localhost/127.0.0.1:11230 2017-01-24 22:13:15,665 [myid:1] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxn@354] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running 2017-01-24 22:13:15,665 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11230:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:52580 (no session established for client) 2017-01-24 22:13:15,665 [myid:] - INFO [Thread-12-SendThread(localhost:11230):ClientCnxn$SendThread@1098] - Unable to read additional data from server sessionid 0x159d440f0ed0000, likely server has closed socket, closing socket connection and attempting reconnect 2017-01-24 22:13:15,917 [myid:0] - INFO [localhost/127.0.0.1:11250:QuorumCnxManager$Listener@728] - Leaving listener 2017-01-24 22:13:16,602 [myid:] - WARN [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@957] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/jenkins/workspace/CDH5-ZooKeeper-3.4.5-JDK7/build/test/tmp/test2999887027410032136.junit.dir/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. 
2017-01-24 22:13:16,602 [myid:] - INFO [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:11233 2017-01-24 22:13:16,602 [myid:] - WARN [Thread-12-SendThread(localhost:11233):ClientCnxn$SendThread@1102] - Session 0x159d440f0ed0000 for server null, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081) 2017-01-24 22:13:16,651 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11248:Follower@167] - shutdown called java.lang.Exception: shutdown Follower at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:167) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:855) 2017-01-24 22:13:16,651 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11248:FollowerZooKeeperServer@139] - Shutting down 2017-01-24 22:13:16,651 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11248:ZooKeeperServer@419] - shutting down 2017-01-24 22:13:16,651 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11248:FollowerRequestProcessor@105] - Shutting down 2017-01-24 22:13:16,651 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11248:CommitProcessor@181] - Shutting down 2017-01-24 22:13:16,652 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11248:FinalRequestProcessor@415] - shutdown of request processor complete 2017-01-24 22:13:16,652 [myid:0] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11248:SyncRequestProcessor@175] - Shutting down 2017-01-24 22:13:16,652 [myid:0] - WARN [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11248:QuorumPeer@879] - QuorumPeer main thread exited 2017-01-24 22:13:16,653 [myid:] - INFO [main:ZKTestCase$1@60] - SUCCEEDED 
testAuthLearnerServer 2017-01-24 22:13:16,654 [myid:] - INFO [main:ZKTestCase$1@55] - FINISHED testAuthLearnerServer {noformat} |
flaky, flaky-test | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 1 year, 21 weeks ago | 0|i399sn: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2678 | Large databases take a long time to regain a quorum |
Bug | Closed | Major | Fixed | Robert Joseph Evans | Robert Joseph Evans | Robert Joseph Evans | 26/Jan/17 10:16 | 15/Jul/17 01:13 | 14/Feb/17 13:06 | 3.4.9, 3.5.2 | 3.4.10, 3.5.3, 3.6.0 | server | 1 | 12 | ZOOKEEPER-2845, ZOOKEEPER-1674 | I know this is long, but please hear me out. I recently inherited a massive zookeeper ensemble. The snapshot is 3.4 GB on disk. Because of its massive size we have been running into a number of issues. There are lots of problems that we hope to fix with tuning GC etc., but the big one right now, which is blocking us from making progress on the rest of them, is that when we lose a quorum because the leader left, for whatever reason, it can take well over 5 minutes for a new quorum to be established. So we cannot tune the leader without risking downtime. We traced down where the time was being spent and found that each server was clearing the database so it would be read back in again before leader election even started. Then, as part of the sync phase, each server will write out a snapshot to checkpoint the progress it made during the sync. I will be putting up a patch shortly with some proposed changes. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 2 years, 48 weeks, 2 days ago | 0|i3990v: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2677 | Verify the occurrence of CancelledKeyException in zookeeper branch-3.5 and above |
Task | Open | Major | Unresolved | Michael Han | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 26/Jan/17 01:20 | 31/Jan/17 11:41 | 3.5.3, 3.6.0 | 0 | 1 | ZOOKEEPER-2044 | Red Hat Enterprise Linux Server release 6.2 | As per the [discussion|https://issues.apache.org/jira/browse/ZOOKEEPER-2044?focusedCommentId=15836893&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15836893] in the ZOOKEEPER-2044 jira, we need to analyse the chance of {{CancelledKeyException}} and fix it (if any) in branch-3.5 and master. This jira task can be used for that. | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 8 weeks ago | 0|i3989j: |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2676 | Enable remote debugging unit tests on CLI |
Improvement | Resolved | Minor | Won't Fix | Unassigned | Edward Ribeiro | Edward Ribeiro | 25/Jan/17 07:20 | 06/Nov/19 13:52 | 06/Nov/19 13:52 | 1 | 1 | Sometimes it's useful to be able to run a unit test from the CLI and then attach an IDE to enable debugging as below: {code} $ ant -Dtestcase=FourLetterWordsTest -DremoteDebug=true test-core-java {code} The unit test will stop as below: {code} (...) junit.run-concurrent: [echo] Running 1 concurrent JUnit processes. [junit] Listening for transport dt_socket at address: 5005 {code} And we will be able to put breakpoints on the target class and bind the IDE to its process to step through the test. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 8 weeks, 1 day ago | 0|i396mf: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2675 | Bump Mockito to version 2 |
Improvement | Open | Trivial | Unresolved | Unassigned | Edward Ribeiro | Edward Ribeiro | 25/Jan/17 06:06 | 09/Dec/19 08:07 | 0 | 2 | The current Mockito version is 1.8.2, but version 2 brings new improvements while keeping backwards compatibility with JDK 6 (branch-3.4) and partially supporting JDK 8. So, this issue is to bring the Mockito version up to date. | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 14 weeks, 3 days ago | 0|i396gn: ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2674 | Projet-Final |
Bug | Resolved | Trivial | Invalid | Unassigned | berkani | berkani | 24/Jan/17 09:33 | 31/Jan/17 11:38 | 31/Jan/17 11:38 | 0 | 2 | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 7 weeks, 2 days ago | 0|i394q7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2673 | projet |
Bug | Resolved | Trivial | Invalid | Unassigned | berkani | berkani | 24/Jan/17 08:27 | 31/Jan/17 11:38 | 31/Jan/17 11:38 | 0 | 1 | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 7 weeks, 2 days ago | 0|i394nr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2672 | Remove CHANGE.txt |
Improvement | Closed | Major | Fixed | Michael Han | Michael Han | Michael Han | 23/Jan/17 13:26 | 31/Mar/17 05:01 | 09/Feb/17 22:08 | 3.4.9, 3.5.2 | 3.4.10, 3.5.3, 3.6.0 | build | 0 | 7 | CHANGE.txt is already not the source of truth for what's changed after we migrated to git - most of the git commits in the recent couple of months don't update CHANGE.txt. Updating CHANGE.txt automatically during the commit flow is non-trivial, and doing it manually is cumbersome and error-prone. The consensus is that we will rely on source control revision logs instead of CHANGE.txt moving forward; see https://www.mail-archive.com/dev@zookeeper.apache.org/msg37108.html for more details. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 5 weeks, 3 days ago | 0|i3937r: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2671 | Fix compilation error in branch-3.4 |
Bug | Closed | Major | Fixed | Rakesh Radhakrishnan | Mohammad Arshad | Mohammad Arshad | 22/Jan/17 22:59 | 31/Mar/17 05:01 | 23/Jan/17 01:55 | 3.4.10 | server | 0 | 5 | ZOOKEEPER-2574 | branch-3.4 code compilation is failing. Following are the compilation errors: {code} compile-test: [mkdir] Created dir: D:\gitHome\zookeeperTrunk\build\test\classes [javac] Compiling 146 source files to D:\gitHome\zookeeperTrunk\build\test\classes [javac] warning: [options] bootstrap class path not set in conjunction with -source 1.6 [javac] D:\gitHome\zookeeperTrunk\src\java\test\org\apache\zookeeper\server\PurgeTxnTest.java:464: error: cannot find symbol [javac] ZooKeeper zk = ClientBase.createZKClient(HOSTPORT); [javac] ^ [javac] symbol: method createZKClient(String) [javac] location: class ClientBase [javac] D:\gitHome\zookeeperTrunk\src\java\test\org\apache\zookeeper\server\PurgeTxnTest.java:503: error: cannot find symbol [javac] zk = ClientBase.createZKClient(HOSTPORT); [javac] ^ [javac] symbol: method createZKClient(String) [javac] location: class ClientBase [javac] Note: Some input files use or override a deprecated API. {code} |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 8 weeks, 3 days ago | 0|i3924f: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2670 | CLONE - Connections fo ZooKeeper server becomes slow over time with native GSSAPI |
Bug | Resolved | Major | Duplicate | Enis Soztutar | Yan Fitterer | Yan Fitterer | 19/Jan/17 14:32 | 09/Feb/17 12:52 | 09/Feb/17 12:52 | 3.4.6, 3.4.7, 3.4.8, 3.5.0 | 3.4.6, 3.4.7, 3.4.8, 3.5.2 | server | 0 | 4 | ZOOKEEPER-2230 | OS: RHEL6 Java: 1.8.0_40 Configuration: java.env: {noformat} SERVER_JVMFLAGS="$SERVER_JVMFLAGS -Xmx5120m" SERVER_JVMFLAGS="$SERVER_JVMFLAGS -Djava.security.auth.login.config=/local/apps/zookeeper-test1/conf/jaas-server.conf" SERVER_JVMFLAGS="$SERVER_JVMFLAGS -Dsun.security.jgss.native=true" {noformat} jaas-server.conf: {noformat} Server { com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true isInitiator=false principal="zookeeper/<hostname>@<REALM>"; }; {noformat} Process environment: {noformat} KRB5_KTNAME=/local/apps/zookeeper-test1/conf/keytab ZOO_LOG_DIR=/local/apps/zookeeper-test1/log ZOOCFGDIR=/local/apps/zookeeper-test1/conf {noformat} |
ZooKeeper server becomes slow over time when native GSSAPI is used. The connection to the server starts taking upto 10 seconds. This is happening with ZooKeeper-3.4.6 and is fairly reproducible. Debug logs: {noformat} 2015-07-02 00:58:49,318 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:NIOServerCnxnFactory@197] - Accepted socket connection from /<client_ip>:47942 2015-07-02 00:58:49,318 [myid:] - DEBUG [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:ZooKeeperSaslServer@78] - serviceHostname is '<zookeeper-server>' 2015-07-02 00:58:49,318 [myid:] - DEBUG [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:ZooKeeperSaslServer@79] - servicePrincipalName is 'zookeeper' 2015-07-02 00:58:49,318 [myid:] - DEBUG [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:ZooKeeperSaslServer@80] - SASL mechanism(mech) is 'GSSAPI' 2015-07-02 00:58:49,324 [myid:] - DEBUG [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:ZooKeeperSaslServer@106] - Added private credential to subject: [GSSCredential: zookeeper@<zookeeper-server> 1.2.840.113554.1.2.2 Accept [class sun.security.jgss.wrapper.GSSCredElement]] 2015-07-02 00:58:59,441 [myid:] - DEBUG [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:ZooKeeperServer@810] - Session establishment request from client /<client_ip>:47942 client's lastZxid is 0x0 2015-07-02 00:58:59,441 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:ZooKeeperServer@868] - Client attempting to establish new session at /<client_ip>:47942 2015-07-02 00:58:59,448 [myid:] - DEBUG [SyncThread:0:FinalRequestProcessor@88] - Processing request:: sessionid:0x14e486028785c81 type:createSession cxid:0x0 zxid:0x110e79 txntype:-10 reqpath:n/a 2015-07-02 00:58:59,448 [myid:] - DEBUG [SyncThread:0:FinalRequestProcessor@160] - sessionid:0x14e486028785c81 type:createSession cxid:0x0 zxid:0x110e79 txntype:-10 reqpath:n/a 2015-07-02 00:58:59,448 [myid:] - INFO [SyncThread:0:ZooKeeperServer@617] - Established session 0x14e486028785c81 with negotiated timeout 10000 for client 
/<client_ip>:47942 2015-07-02 00:58:59,452 [myid:] - DEBUG [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:ZooKeeperServer@949] - Responding to client SASL token. 2015-07-02 00:58:59,452 [myid:] - DEBUG [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:ZooKeeperServer@953] - Size of client SASL token: 706 2015-07-02 00:58:59,460 [myid:] - DEBUG [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:ZooKeeperServer@984] - Size of server SASL response: 161 2015-07-02 00:58:59,462 [myid:] - DEBUG [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:ZooKeeperServer@949] - Responding to client SASL token. 2015-07-02 00:58:59,462 [myid:] - DEBUG [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:ZooKeeperServer@953] - Size of client SASL token: 0 2015-07-02 00:58:59,462 [myid:] - DEBUG [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:ZooKeeperServer@984] - Size of server SASL response: 32 2015-07-02 00:58:59,463 [myid:] - DEBUG [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:ZooKeeperServer@949] - Responding to client SASL token. 2015-07-02 00:58:59,463 [myid:] - DEBUG [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:ZooKeeperServer@953] - Size of client SASL token: 32 2015-07-02 00:58:59,464 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:SaslServerCallbackHandler@118] - Successfully authenticated client: authenticationID=<user_principal>; authorizationID=<user_principal>. 
2015-07-02 00:58:59,464 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:ZooKeeperServer@964] - adding SASL authorization for authorizationID: <user_principal> 2015-07-02 00:58:59,465 [myid:] - INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@494] - Processed session termination for sessionid: 0x14e486028785c81 2015-07-02 00:58:59,467 [myid:] - DEBUG [SyncThread:0:FinalRequestProcessor@88] - Processing request:: sessionid:0x14e486028785c81 type:closeSession cxid:0x1 zxid:0x110e7a txntype:-11 reqpath:n/a 2015-07-02 00:58:59,467 [myid:] - DEBUG [SyncThread:0:FinalRequestProcessor@160] - sessionid:0x14e486028785c81 type:closeSession cxid:0x1 zxid:0x110e7a txntype:-11 reqpath:n/a 2015-07-02 00:58:59,467 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:NIOServerCnxn@1007] - Closed socket connection for client /<client_ip>:47942 which had sessionid 0x14e486028785c81 {noformat} If you see, after adding the credentials to privateCredential set, it takes roughly 10 seconds to reach to session establishment request. From the code it looks like Subject.doAs() is taking a lot of time. 
I connected it to jdb while it was waiting and got following stacktrace: {noformat} NIOServerCxn.Factory:0.0.0.0/0.0.0.0:58909: [1] java.util.HashMap$TreeNode.find (HashMap.java:1,865) [2] java.util.HashMap$TreeNode.find (HashMap.java:1,861) [3] java.util.HashMap$TreeNode.find (HashMap.java:1,861) [4] java.util.HashMap$TreeNode.find (HashMap.java:1,861) [5] java.util.HashMap$TreeNode.find (HashMap.java:1,861) [6] java.util.HashMap$TreeNode.find (HashMap.java:1,861) [7] java.util.HashMap$TreeNode.find (HashMap.java:1,861) [8] java.util.HashMap$TreeNode.putTreeVal (HashMap.java:1,981) [9] java.util.HashMap.putVal (HashMap.java:637) [10] java.util.HashMap.put (HashMap.java:611) [11] java.util.HashSet.add (HashSet.java:219) [12] javax.security.auth.Subject$ClassSet.populateSet (Subject.java:1,418) [13] javax.security.auth.Subject$ClassSet.<init> (Subject.java:1,372) [14] javax.security.auth.Subject.getPrivateCredentials (Subject.java:767) [15] sun.security.jgss.GSSUtil$1.run (GSSUtil.java:340) [16] sun.security.jgss.GSSUtil$1.run (GSSUtil.java:332) [17] java.security.AccessController.doPrivileged (native method) [18] sun.security.jgss.GSSUtil.searchSubject (GSSUtil.java:332) [19] sun.security.jgss.wrapper.NativeGSSFactory.getCredFromSubject (NativeGSSFactory.java:53) [20] sun.security.jgss.wrapper.NativeGSSFactory.getCredentialElement (NativeGSSFactory.java:116) [21] sun.security.jgss.GSSManagerImpl.getCredentialElement (GSSManagerImpl.java:193) [22] sun.security.jgss.GSSCredentialImpl.add (GSSCredentialImpl.java:427) [23] sun.security.jgss.GSSCredentialImpl.<init> (GSSCredentialImpl.java:62) [24] sun.security.jgss.GSSManagerImpl.createCredential (GSSManagerImpl.java:154) [25] com.sun.security.sasl.gsskerb.GssKrb5Server.<init> (GssKrb5Server.java:108) [26] com.sun.security.sasl.gsskerb.FactoryImpl.createSaslServer (FactoryImpl.java:85) [27] javax.security.sasl.Sasl.createSaslServer (Sasl.java:524) [28] org.apache.zookeeper.server.ZooKeeperSaslServer$1.run 
(ZooKeeperSaslServer.java:118) [29] org.apache.zookeeper.server.ZooKeeperSaslServer$1.run (ZooKeeperSaslServer.java:114) [30] java.security.AccessController.doPrivileged (native method) [31] javax.security.auth.Subject.doAs (Subject.java:422) [32] org.apache.zookeeper.server.ZooKeeperSaslServer.createSaslServer (ZooKeeperSaslServer.java:114) [33] org.apache.zookeeper.server.ZooKeeperSaslServer.<init> (ZooKeeperSaslServer.java:48) [34] org.apache.zookeeper.server.NIOServerCnxn.<init> (NIOServerCnxn.java:100) [35] org.apache.zookeeper.server.NIOServerCnxnFactory.createConnection (NIOServerCnxnFactory.java:161) [36] org.apache.zookeeper.server.NIOServerCnxnFactory.run (NIOServerCnxnFactory.java:202) [37] java.lang.Thread.run (Thread.java:745) {noformat} This doesn't happen when we use JGSS; I think adding a credential to the privateCredential set for every connection is causing Subject.doAs() to take much longer. |
patch | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 6 weeks ago | Fix slowness in connections when setup with native GSSAPI. | kerberos, native-gssapi | 0|i38y6f: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2669 | follower failed to reconnect to leader after a network error |
Bug | Open | Major | Unresolved | Unassigned | Zhenghua Chen | Zhenghua Chen | 19/Jan/17 06:30 | 29/Jul/19 14:44 | 3.4.9 | quorum, server | 0 | 9 | CentOS7 | We have a zookeeper cluster with 3 nodes named s1, s2, and s3. By mistake, we shut down the ethernet interface of s2, and the zk follower on it shut down (the zk process remained). Later, after the ethernet interface came up again, s2 failed to reconnect to leader s3 as a follower. Follower s2 keeps printing logs like this: {quote} 2017-01-19 16:40:58,956 WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:7181] o.a.z.s.q.Learner - Got zxid 0x320001019f expected 0x1 2017-01-19 16:40:58,956 ERROR [SyncThread:1] o.a.z.s.ZooKeeperCriticalThread - Severe unrecoverable error, from thread : SyncThread:1 java.nio.channels.ClosedChannelException: null at sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:99) at sun.nio.ch.FileChannelImpl.position(FileChannelImpl.java:250) at org.apache.zookeeper.server.persistence.Util.padLogFile(Util.java:215) at org.apache.zookeeper.server.persistence.FileTxnLog.padFile(FileTxnLog.java:241) at org.apache.zookeeper.server.persistence.FileTxnLog.append(FileTxnLog.java:219) at org.apache.zookeeper.server.persistence.FileTxnSnapLog.append(FileTxnSnapLog.java:314) at org.apache.zookeeper.server.ZKDatabase.append(ZKDatabase.java:470) at org.apache.zookeeper.server.SyncRequestProcessor.run(SyncRequestProcessor.java:140) 2017-01-19 16:40:58,956 INFO [SyncThread:1] o.a.z.s.ZooKeeperServerListenerImpl - Thread SyncThread:1 exits, error code 1 2017-01-19 16:40:58,956 INFO [SyncThread:1] o.a.z.s.SyncRequestProcessor - SyncRequestProcessor exited! 
2017-01-19 16:40:58,957 INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:7181] o.a.z.s.q.Learner - shutdown called java.lang.Exception: shutdown Follower at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:164) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:850) {quote} And, leader s3 keeps printing log like this: {quote} 2017-01-19 16:30:50,452 INFO [LearnerHandler-/192.168.40.51:35949] o.a.z.s.q.LearnerHandler - Follower sid: 1 : info : org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@95258f0 2017-01-19 16:30:50,452 INFO [LearnerHandler-/192.168.40.51:35949] o.a.z.s.q.LearnerHandler - Synchronizing with Follower sid: 1 maxCommittedLog=0x320001019e minCommittedLog=0x320000ffaa peerLastZxid=0x2300000000 2017-01-19 16:30:50,453 WARN [LearnerHandler-/192.168.40.51:35949] o.a.z.s.q.LearnerHandler - Unhandled proposal scenario 2017-01-19 16:30:50,453 INFO [LearnerHandler-/192.168.40.51:35949] o.a.z.s.q.LearnerHandler - Sending SNAP 2017-01-19 16:30:50,453 INFO [LearnerHandler-/192.168.40.51:35949] o.a.z.s.q.LearnerHandler - Sending snapshot last zxid of peer is 0x2300000000 zxid of leader is 0x320001019esent zxid of db as 0x320001019e 2017-01-19 16:30:50,461 INFO [LearnerHandler-/192.168.40.51:35949] o.a.z.s.q.LearnerHandler - Received NEWLEADER-ACK message from 1 2017-01-19 16:30:51,738 ERROR [LearnerHandler-/192.168.40.51:35934] o.a.z.s.q.LearnerHandler - Unexpected exception causing shutdown while sock still open java.net.SocketTimeoutException: Read timed out at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.read(SocketInputStream.java:152) at java.net.SocketInputStream.read(SocketInputStream.java:122) at java.io.BufferedInputStream.fill(BufferedInputStream.java:235) at java.io.BufferedInputStream.read(BufferedInputStream.java:254) at java.io.DataInputStream.readInt(DataInputStream.java:387) at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63) at 
org.apache.zookeeper.server.quorum.QuorumPacket.deserialize(QuorumPacket.java:83) at org.apache.jute.BinaryInputArchive.readRecord(BinaryInputArchive.java:99) at org.apache.zookeeper.server.quorum.LearnerHandler.run(LearnerHandler.java:542) {quote} We ran netstat and found lots of sockets on s2 stuck in CLOSE_WAIT that were never closed. {quote} tcp6 10865 0 192.168.40.51:47181 192.168.40.57:7288 CLOSE_WAIT 2217/java tcp6 2576 0 192.168.40.51:57181 192.168.40.57:7288 CLOSE_WAIT 2217/java {quote} It seems that s2 has a connection leak. After restarting the zk process on s2, it works fine. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 33 weeks, 3 days ago | 0|i38xdz: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2668 | Remove reference to requireClientAuthScheme from https://cwiki.apache.org/confluence/display/ZOOKEEPER/Client-Server+mutual+authentication |
Bug | Open | Major | Unresolved | Unassigned | Devaraj Das | Devaraj Das | 18/Jan/17 17:55 | 17/Sep/19 11:28 | documentation | 0 | 6 | I was trying to see if ZK can be configured to always do client authentication (globally and not per znode). I came across https://cwiki.apache.org/confluence/display/ZOOKEEPER/Client-Server+mutual+authentication, which describes a config key requireClientAuthScheme that, when set to 'sasl', should do the job. But upon looking at the code (in master and in branch-3.5), I don't see any reference to it. Raising this jira to update the wiki (assuming I am on the right track). There are probably ways to update the wiki otherwise, but I wanted to get some attention on this before we did that. (cc [~phunt], [~ekoontz]). |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 26 weeks, 2 days ago | 0|i38wf3: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2667 | NPE in the patch for ZOOKEEPER-2139 when multiple connections are made |
Bug | Resolved | Major | Invalid | Unassigned | Hari Krishna Dara | Hari Krishna Dara | 17/Jan/17 06:35 | 18/Jan/17 07:23 | 18/Jan/17 07:23 | 3.5.2, 3.6.0 | java client | 0 | 3 | ZOOKEEPER-2139 | ZOOKEEPER-2139 added support for connecting to multiple ZK services, but this also introduced a bug that causes a cryptic NPE. The client sees the below sort of error messages: {noformat} Exception while trying to create SASL client: java.lang.NullPointerException SASL authentication with Zookeeper Quorum member failed: javax.security.sasl.SaslException: saslClient failed to initialize properly: it's null. Error while calling watcher java.lang.NullPointerException at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.connectionEvent(ZooKeeperWatcher.java:581) at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:532) at org.apache.hadoop.hbase.zookeeper.PendingWatcher.process(PendingWatcher.java:40) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:579) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:554) {noformat} The line at {{ZooKeeperWatcher.connectionEvent(ZooKeeperWatcher.java:581)}} points to the middle line below, where {{event.getState()}} is {{null}}: {noformat} private void connectionEvent(WatchedEvent event) { switch(event.getState()) { case SyncConnected: {noformat} However, the event's state is {{null}} because of a couple of other bugs, particularly an NPE that gets a mention in the log without a stacktrace. This first NPE causes an incorrect initialization of the event and results in the second NPE with the stacktrace. The reason for the first NPE comes from this code in {{ZookeeperSaslClient}}: {noformat} if (!initializedLogin) { ... } Subject subject = login.getSubject(); {noformat} Before the patch for ZOOKEEPER-2139, both the {{login}} and {{initializedLogin}} were {{static}} fields of {{ZookeeperSaslClient}}. 
To support multiple ZK clients, the {{login}} field was changed from {{static}} to an instance field; however, the {{initializedLogin}} field was left {{static}}. Because of this, subsequent attempts to connect to ZK think that the login doesn't need to be done and blindly use the {{login}} variable, which causes the NPE. At the core, the fix is simply to change {{initializedLogin}} to an instance variable, but we have made a few additional changes to improve the logging and state handling. I will attach a patch soon. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 9 weeks, 1 day ago | 0|i38t5j: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2666 | the watch function called many times when it should be called once |
Bug | Open | Major | Unresolved | Unassigned | ZHE CHEN | ZHE CHEN | 16/Jan/17 00:51 | 23/Jan/17 17:38 | 3.4.5 | c client, server | 0 | 3 | Ubuntu 14.04; ZooKeeper 3.4.5, installed via apt-get |
We have a service A, which has 2 instances, A1 and A2. We also have another 2 services, B and C. B has 2 instances, B1 and B2; C has 2 instances, C1 and C2. A1 and A2 each register a child watch for B and C (2 individual watches, of course). I restarted B1 and C1 at nearly the same time. Theoretically, A1 and A2 should then both receive 2 events about the child changes of services B and C. However, the real result is that A1 received the 2 children changes of services B and C separately, while A2 only received the children change of service B. Moreover, A2 got the children change of service B many times even though service B only changed once at that time (I added auto re-registration so A2 can receive the event more than once). Till now, this has only happened once. If it happens again, maybe I will provide some logs. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 8 weeks, 3 days ago | 0|i38qsn: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2665 | Port QA github pull request build to branch 3.4 and 3.5 |
Test | Closed | Major | Fixed | Enrico Olivelli | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 14/Jan/17 12:34 | 18/May/17 00:00 | 07/Mar/17 19:33 | 3.4.10, 3.5.3 | build | 0 | 6 | We have QA build for pull requests against master but not against branches 3.4 and 3.5. We need to port the necessary wiring to do it, it shouldn't be difficult. | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 2 years, 44 weeks ago | 0|i38pun: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2664 | ClientPortBindTest#testBindByAddress may fail due to "No such device" exception |
Test | Closed | Major | Fixed | Ted Yu | Ted Yu | Ted Yu | 13/Jan/17 05:04 | 31/Mar/17 05:01 | 14/Jan/17 00:32 | 3.4.6 | 3.4.10, 3.5.3, 3.6.0 | 0 | 4 | ZOOKEEPER-2395 | Saw the following in a recent run: {code} Stacktrace java.net.SocketException: No such device at java.net.NetworkInterface.isLoopback0(Native Method) at java.net.NetworkInterface.isLoopback(NetworkInterface.java:390) at org.apache.zookeeper.test.ClientPortBindTest.testBindByAddress(ClientPortBindTest.java:61) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:52) Standard Output 2017-01-12 23:20:43,792 [myid:] - INFO [main:ZKTestCase$1@50] - STARTING testBindByAddress 2017-01-12 23:20:43,795 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@50] - RUNNING TEST METHOD testBindByAddress 2017-01-12 23:20:43,799 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@62] - TEST METHOD FAILED testBindByAddress java.net.SocketException: No such device at java.net.NetworkInterface.isLoopback0(Native Method) at java.net.NetworkInterface.isLoopback(NetworkInterface.java:390) at org.apache.zookeeper.test.ClientPortBindTest.testBindByAddress(ClientPortBindTest.java:61) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:52) at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:48) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184) at org.junit.runners.ParentRunner.run(ParentRunner.java:236) at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:532) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1179) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:1030) {code} Proposed fix is to catch exception from isLoopback() call. |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 8 weeks, 3 days ago | 0|i38ngn: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
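The proposed fix, tolerating the SocketException that NetworkInterface.isLoopback() can throw when an interface vanishes mid-scan, might look like this sketch (a hypothetical helper class, not the actual ClientPortBindTest code):

```java
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.net.SocketException;
import java.util.Collections;
import java.util.Enumeration;

public class LoopbackScan {
    // Find a non-loopback address, skipping interfaces that can no longer be
    // queried ("No such device") instead of letting the exception escape.
    public static InetAddress firstNonLoopback() {
        Enumeration<NetworkInterface> nifs;
        try {
            nifs = NetworkInterface.getNetworkInterfaces();
        } catch (SocketException e) {
            return null; // cannot enumerate interfaces at all
        }
        if (nifs == null) {
            return null;
        }
        for (NetworkInterface nif : Collections.list(nifs)) {
            boolean loopback;
            try {
                loopback = nif.isLoopback(); // may throw if the device disappeared
            } catch (SocketException e) {
                continue; // proposed fix: skip this interface rather than fail
            }
            if (loopback) {
                continue;
            }
            for (InetAddress addr : Collections.list(nif.getInetAddresses())) {
                if (!addr.isLoopbackAddress()) {
                    return addr;
                }
            }
        }
        return null;
    }
}
```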
| ZooKeeper | ZOOKEEPER-2663 | Enable remote jmx, zkCli.sh start failed with jmx communication error |
Bug | Open | Major | Unresolved | Unassigned | linbo.liao | linbo.liao | 11/Jan/17 08:41 | 11/Jan/17 08:41 | 3.4.9 | jmx | 0 | 2 | OS: Centos 6.7 x86_64 Zookeeper: 3.4.9 Java: HotSpot 1.8.0_65-b17 |
My laptop is a MacBook Pro with macOS Sierra (IP: 192.168.2.102). A VM (IP: 192.168.2.107) is running on VirtualBox. I deployed zookeeper-3.4.9 on the VM and enabled remote JMX with the options: -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=8415 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.rmi.port=8415 -Djava.rmi.server.hostname=192.168.2.107 Testing with jconsole on the Mac, connecting to 192.168.2.107:8415 works fine. Running zkCli.sh fails: $ bin/zkCli.sh Error: JMX connector server communication error: service:jmx:rmi://localhost.localdomain:8415 $ cat /etc/hosts 127.0.0.1 localhost.localdomain localhost ::1 localhost6.localdomain6 localhost6 |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 10 weeks, 1 day ago | 0|i38jzz: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
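One possible workaround (an assumption on my part, not a confirmed fix for this report) is to put the remote-JMX flags in SERVER_JVMFLAGS, which zkServer.sh reads but zkCli.sh does not, so the client shell never tries to start its own JMX connector on the same port and hostname:

```shell
# Sketch: set the remote-JMX options only for the server process. zkServer.sh
# appends $SERVER_JVMFLAGS to the server JVM; zkCli.sh uses CLIENT_JVMFLAGS /
# JVMFLAGS instead, so it stays unaffected by these options.
SERVER_JVMFLAGS="-Dcom.sun.management.jmxremote \
 -Dcom.sun.management.jmxremote.port=8415 \
 -Dcom.sun.management.jmxremote.rmi.port=8415 \
 -Dcom.sun.management.jmxremote.ssl=false \
 -Dcom.sun.management.jmxremote.authenticate=false \
 -Djava.rmi.server.hostname=192.168.2.107"
export SERVER_JVMFLAGS
# bin/zkServer.sh start   # the server picks these flags up; zkCli.sh does not
```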
| ZooKeeper | ZOOKEEPER-2662 | Export a metric for txn log sync times |
Improvement | Resolved | Major | Fixed | Edward Ribeiro | Andrew Kyle Purtell | Andrew Kyle Purtell | 10/Jan/17 21:08 | 28/Apr/17 12:16 | 28/Apr/17 11:58 | 3.5.4, 3.6.0 | 0 | 7 | In FileTxnLog there is code that records the amount of time required to fsync the txn log, in order to warn if that time exceeds a configurable threshold. This information should also be exported as a metric available via JMX so that an important aspect of quorum performance can be monitored. ZooKeeperServerMXBean already carries some global latency information for the server process, so it seems like a good place to put this, if not an entirely new bean for the TxnLog. After ZOOKEEPER-2310 we might want to collect the same information for snapshots. |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 2 years, 46 weeks, 6 days ago | 0|i38j3z: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
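A minimal sketch of the kind of accumulator the request describes (hypothetical class and method names, not the actual FileTxnLog or ZooKeeperServerMXBean code): record each fsync's elapsed time and expose the average and maximum for a JMX bean to read.

```java
import java.util.concurrent.atomic.AtomicLong;

// Thread-safe fsync latency accumulator; getters could back MXBean attributes.
public class FsyncStats {
    private final AtomicLong count = new AtomicLong();
    private final AtomicLong totalMs = new AtomicLong();
    private final AtomicLong maxMs = new AtomicLong();

    // Called after each txn log fsync with the measured elapsed time.
    public void record(long elapsedMs) {
        count.incrementAndGet();
        totalMs.addAndGet(elapsedMs);
        maxMs.accumulateAndGet(elapsedMs, Math::max);
    }

    public long getMaxFsyncMs() {
        return maxMs.get();
    }

    public double getAvgFsyncMs() {
        long c = count.get();
        return c == 0 ? 0.0 : (double) totalMs.get() / c;
    }
}
```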
| ZooKeeper | ZOOKEEPER-2661 | It costs about 5055 ms to create Zookeeper object for the first time. |
Bug | Resolved | Major | Not A Problem | Unassigned | Yaohui Wu | Yaohui Wu | 10/Jan/17 06:20 | 23/Jan/17 14:46 | 11/Jan/17 03:58 | 3.4.6 | java client | 0 | 3 | See the description below. | I created and closed a ZooKeeper client 10 times; it took about 5055 ms the first time. See attached files for some test code and output. |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 3 years, 8 weeks, 3 days ago | 0|i38hon: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2660 | acceptedEpoch and currentEpoch data inconsistency, ZK process can not start! |
Bug | Resolved | Major | Duplicate | Unassigned | Yongcheng Liu | Yongcheng Liu | 07/Jan/17 08:14 | 20/Jan/17 13:03 | 20/Jan/17 13:03 | 3.4.6, 3.4.9 | quorum | 0 | 3 | ZOOKEEPER-2307 | ZK: 3.4.9 | 1. If currentEpoch is bigger than acceptedEpoch, ZK will throw an IOException when loadDataBase starts. 2. Function bug: setAcceptedEpoch and setCurrentEpoch modify the in-memory variable first and then write the epoch to file, so if the file write fails, memory has already been modified. The proposed solution: for example, public void setAcceptedEpoch(long e) throws IOException { acceptedEpoch = e; writeLongToFile(ACCEPTED_EPOCH_FILENAME, e); } needs to be changed to: public void setAcceptedEpoch(long e) throws IOException { writeLongToFile(ACCEPTED_EPOCH_FILENAME, e); acceptedEpoch = e; } |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 8 weeks, 6 days ago | 0|i38eb3: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
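The proposed ordering can be shown in a self-contained sketch (a simplified stand-in, not the real QuorumPeer): persist the epoch first, and only update the in-memory copy once the write has succeeded, so a failed write cannot leave memory and disk disagreeing.

```java
import java.io.File;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;

public class EpochStore {
    private final File dir;
    private long acceptedEpoch;

    public EpochStore(File dir) {
        this.dir = dir;
    }

    public void setAcceptedEpoch(long e) throws IOException {
        File f = new File(dir, "acceptedEpoch");
        // Write to disk first; if this throws, acceptedEpoch stays unchanged.
        Files.write(f.toPath(), Long.toString(e).getBytes(StandardCharsets.UTF_8));
        // Only after a successful write do we update the in-memory value.
        acceptedEpoch = e;
    }

    public long getAcceptedEpoch() {
        return acceptedEpoch;
    }
}
```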
| ZooKeeper | ZOOKEEPER-2659 | Use log4j2 as a logging framework as log4j 1.X is now deprecated |
Wish | Resolved | Minor | Won't Do | Pushkar Raste | Pushkar Raste | Pushkar Raste | 06/Jan/17 14:37 | 30/Jan/19 08:36 | 17/Mar/17 15:03 | 1 | 5 | 0 | 600 | ZOOKEEPER-2342 | ZooKeeper currently uses {{log4j 1.X}} as the default logging framework. {{log4j 1.X}} is now deprecated: http://logging.apache.org/log4j/1.2/ This ticket is to track efforts to move ZooKeeper to {{log4j2}}. |
100% | 100% | 600 | 0 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 2 years, 50 weeks ago | 0|i38dcv: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2658 | Trunk / branch-3.5 build broken. |
Task | Closed | Critical | Fixed | Michael Han | Michael Han | Michael Han | 03/Jan/17 12:35 | 17/May/17 23:43 | 09/Jan/17 17:33 | 3.5.2, 3.6.0 | 3.5.3, 3.6.0 | 0 | 1 | https://builds.apache.org/job/ZooKeeper-trunk-openjdk7/ https://builds.apache.org/job/ZooKeeper_branch35_openjdk7/ The trunk build has been broken for over two weeks. It is likely caused by Infrastructure issues. {noformat} [ZooKeeper-trunk-openjdk7] $ /home/jenkins/tools/ant/latest/bin/ant -Dtest.output=yes -Dtest.junit.threads=8 -Dtest.junit.output.format=xml -Djavac.target=1.7 clean test-core-java Error: JAVA_HOME is not defined correctly. We cannot execute /usr/lib/jvm/java-7-openjdk-amd64//bin/java Build step 'Invoke Ant' marked build as failure Recording test results ERROR: Step 'Publish JUnit test result report' failed: No test report files were found. Configuration error? Email was triggered for: Failure - Any Sending email for trigger: Failure - Any {noformat} |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 10 weeks, 3 days ago | 0|i387lb: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2657 | Using zookeeper without SASL causes error logging |
Improvement | Open | Major | Unresolved | Unassigned | Aseem Bansal | Aseem Bansal | 29/Dec/16 05:07 | 29/Dec/16 05:07 | 3.4.6 | 0 | 2 | We are using Kafka, which uses ZooKeeper, but we are not using SASL. So we keep getting {noformat} CRITICAL: Found 32 lines (limit=1/1): (1) 2016-12-16 07:02:14.780 [INFO ] [r] org.apache.zookeeper.ClientCnxn [] - Opening socket connection to server 10.0.1.47/10.0.1.47:2181. Will not attempt to authenticate using SASL (unknown error) {noformat} I found http://stackoverflow.com/a/26532778/2235567 and, based on it, this: https://svn.apache.org/repos/asf/zookeeper/trunk/src/java/main/org/apache/zookeeper/client/ZooKeeperSaslClient.java Searching for "Will not attempt to authenticate using SASL" turns up the "unknown error". Can the message be changed so that the word "error" is not there, as it is not really an error? |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 12 weeks ago | 0|i383of: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2656 | Fix ServerConfigTest#testValidArguments test case failures |
Test | Closed | Major | Fixed | Michael Han | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 22/Dec/16 23:21 | 31/Mar/17 05:01 | 06/Jan/17 07:27 | 3.4.10, 3.5.3, 3.6.0 | 0 | 4 | ZOOKEEPER-2470 | This jira is to fix the ServerConfigTest#testValidArguments test case failure. Reference: https://builds.apache.org/job/ZooKeeper-trunk/3207/testReport/org.apache.zookeeper/ServerConfigTest/testValidArguments/ {code} Error Message expected: java.lang.String</data/dir> but was: java.io.File</data/dir> Stacktrace junit.framework.AssertionFailedError: expected: java.lang.String</data/dir> but was: java.io.File</data/dir> at org.apache.zookeeper.ServerConfigTest.testValidArguments(ServerConfigTest.java:48) {code} |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 10 weeks, 6 days ago | 0|i37yvr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2655 | Improve NIOServerCnxn#isZKServerRunning to reflect the semantics correctly |
Improvement | Closed | Minor | Fixed | Rakesh Radhakrishnan | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 22/Dec/16 13:34 | 17/May/17 23:43 | 22/Dec/16 17:24 | 3.5.3, 3.6.0 | server | 0 | 4 | ZOOKEEPER-2383 | This jira is to improve the semantics of the following internal functions to make them more readable: # {{NIOServerCnxn#isZKServerRunning()}} => return true if the server is running, false otherwise. # {{AbstractFourLetterCommand#isZKServerRunning()}} => return true if the server is running, false otherwise. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 12 weeks, 6 days ago | 0|i37ybr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2654 | Support Fedora 25: use pkg-config instead of obsolete M4 macros |
Bug | Patch Available | Major | Unresolved | Olaf Flebbe | Olaf Flebbe | Olaf Flebbe | 22/Dec/16 12:40 | 22/Dec/16 13:14 | build | 0 | 1 | BIGTOP-2642 | While compiling Bigtop on Fedora 25 we found that there is an issue with the autoconf detection of cppunit: see BIGTOP-2642 for the error. Some background on the issue can be found here: https://bugzilla.redhat.com/show_bug.cgi?id=1311694 The Fedora maintainers encourage the use of pkg-config rather than crufty *.m4 autoconf magic, by only supplying pkg-config files (*.pc). The patch is surprisingly easy but adds an additional requirement for pkg-config, which has been available on every well-maintained system for ages. Please see my proposed patch. It works for me on Fedora 25, CentOS 6, and macOS with Homebrew. |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 13 weeks ago | 0|i37y9j: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
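The pkg-config route described above usually amounts to replacing the cppunit-supplied M4 macro in configure.ac with the standard PKG_CHECK_MODULES macro. A sketch (the actual attached patch may differ):

```m4
dnl Sketch: discover cppunit via pkg-config instead of the obsolete AM_PATH_CPPUNIT
dnl macro. PKG_CHECK_MODULES populates CPPUNIT_CFLAGS and CPPUNIT_LIBS from cppunit.pc.
PKG_CHECK_MODULES([CPPUNIT], [cppunit >= 1.10.2], [],
  [AC_MSG_ERROR([cppunit not found via pkg-config])])
```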
| ZooKeeper | ZOOKEEPER-2653 | epoch files do not match snapshots and logs |
Bug | Open | Major | Unresolved | Unassigned | Lasaro Camargos | Lasaro Camargos | 22/Dec/16 12:03 | 22/Dec/16 12:03 | 3.4.9 | 0 | 2 | Linux 3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux (centos) | Hi all. After shutting zk down and upgrading to centos 7, ZK would not start with exception Removing file: Dec 19, 2016 10:55:08 PM /hedvig/hpod/log/version-2/log.300ee0308 Removing file: Dec 19, 2016 7:11:23 PM /hedvig/hpod/data/version-2/snapshot.300ee0307 java.lang.RuntimeException: Unable to run quorum server at org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:558) at org.apache.zookeeper.server.quorum.QuorumPeer.start(QuorumPeer.java:500) at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:153) at com.hedvig.hpod.service.PodnetService$1.run(PodnetService.java:2262) at java.lang.Thread.run(Thread.java:745) Caused by: java.io.IOException: The current epoch, 3, is older than the last zxid, 17179871862 at org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:539) ... 4 more All logs are empty, and the following snapshot and commit logs exist find . . ./log ./log/version-2 ./log/version-2/log.40000010a ./log/version-2/log.300ef712b ./log/version-2/log.300f0659e ./log/version-2/.ignore ./data ./data/version-2 ./data/version-2/snapshot.400000109 ./data/version-2/currentEpoch ./data/version-2/acceptedEpoch ./data/version-2/snapshot.300ef712a ./data/version-2/snapshot.300f0659d ./data/myid.bak ./data/myid On other nodes we had the same exception but no commit log deletion. 
java.lang.RuntimeException: Unable to run quorum server at org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:558) at org.apache.zookeeper.server.quorum.QuorumPeer.start(QuorumPeer.java:500) at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:153) at com.hedvig.hpod.service.PodnetService$1.run(PodnetService.java:2262) at java.lang.Thread.run(Thread.java:745) Caused by: java.io.IOException: The current epoch, 3, is older than the last zxid, 17179871862 at org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:539) ./log ./log/version-2 ./log/version-2/log.300f06cfc ./log/version-2/log.300f03890 ./log/version-2/.ignore ./data ./data/version-2 ./data/version-2/snapshot.300f06cfb ./data/version-2/snapshot.300f06f10 ./data/version-2/currentEpoch ./data/version-2/acceptedEpoch ./data/version-2/snapshot.300f0388f ./data/myid.bak ./data/myid /log ./log/version-2 ./log/version-2/log.300f06dbf ./log/version-2/log.300ed96fc ./log/version-2/log.300ef1048 ./log/version-2/.ignore ./data ./data/version-2 ./data/version-2/snapshot.300f06dbe ./data/version-2/currentEpoch ./data/version-2/acceptedEpoch ./data/version-2/snapshot.300ed96fb ./data/version-2/snapshot.300ef1048 ./data/myid.bak ./data/myid The symptoms look like ZOOKEEPER-1549, but we are running 3.4.9 here. Any ideas? |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 13 weeks ago | 0|i37y7z: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2652 | Fix HierarchicalQuorumTest.java |
Bug | Closed | Major | Fixed | Rakesh Radhakrishnan | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 22/Dec/16 01:15 | 31/Mar/17 05:01 | 22/Dec/16 01:34 | 3.4.10 | 3.4.10 | 0 | 4 | The commit of ZOOKEEPER-2479 has introduced a compilation error(due to diamond operator usage) in {{branch-3.4}}, which uses {{JDK 1.6}} | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 12 weeks, 6 days ago | 0|i37x4v: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2651 | Missing src/pom.template in release |
Bug | Closed | Major | Fixed | Rakesh Radhakrishnan | Christopher Tubbs | Christopher Tubbs | 21/Dec/16 20:47 | 31/Mar/17 05:01 | 11/Jan/17 00:28 | 3.4.9, 3.5.2 | 3.4.10, 3.5.3, 3.6.0 | build | 0 | 6 | Trying to build downstream in Fedora, and discovered that the 3.4.9 release tarball is missing the {{src/pom.template}} file. It is present in the {{release-3.4.9}} tag, so I grabbed it from there to patch downstream. | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 10 weeks, 1 day ago | 0|i37wvz: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2650 | Test Improvement by adding more QuorumPeer Auth related test cases |
Test | Closed | Major | Fixed | Rakesh Radhakrishnan | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 21/Dec/16 10:11 | 31/Mar/17 05:01 | 04/Jan/17 00:30 | 3.4.10 | 0 | 4 | ZOOKEEPER-1045 | This jira to add more test cases to the ZOOKEEPER-1045 feature. Cases:- 1) Ensemble with auth enabled Observer. 2) Connecting non-auth Observer to auth enabled quorum. 3) Quorum re-election with auth enabled servers. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 7 weeks, 2 days ago | 0|i37vpb: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2649 | The ZooKeeper do not write in log session ID in which the client has been authenticated. |
Improvement | Open | Trivial | Unresolved | Alex Zhou | Alex Zhou | Alex Zhou | 19/Dec/16 10:30 | 05/Feb/20 07:16 | 3.4.9, 3.5.2 | 3.7.0, 3.5.8 | server | 0 | 2 | ZooKeeper does not log the session ID under which the client has been authenticated. This occurs for digest and for SASL authentications: bq. 2016-12-09 15:46:34,808 [myid:] - INFO [SyncThread:0:ZooKeeperServer@673] - Established session 0x158e39a0a960001 with negotiated timeout 30000 for client /0:0:0:0:0:0:0:1:52626 bq. 2016-12-09 15:46:34,838 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:SaslServerCallbackHandler@118] - Successfully authenticated client: authenticationID=bob; authorizationID=bob. bq. 2016-12-09 15:46:34,848 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:SaslServerCallbackHandler@134] - Setting authorizedID: bob bq. 2016-12-09 15:46:34,848 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@1024] - adding SASL authorization for authorizationID: bob bq. 2016-12-13 10:52:54,915 [myid:] - INFO [SyncThread:0:ZooKeeperServer@673] - Established session 0x158f72acaed0001 with negotiated timeout 30000 for client /172.20.97.175:52217 bq. 2016-12-13 10:52:55,070 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:SaslServerCallbackHandler@118] - Successfully authenticated client: authenticationID=ufm@BILLAB.RU; authorizationID=ufm@BILLAB.RU. bq. 2016-12-13 10:52:55,075 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:SaslServerCallbackHandler@134] - Setting authorizedID: ufm@BILLAB.RU bq. 2016-12-13 10:52:55,075 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@1024] - adding SASL authorization for authorizationID: ufm@BILLAB.RU bq. 2016-12-19 17:43:01,395 [myid:] - INFO [SyncThread:0:ZooKeeperServer@673] - Established session 0x158fd72521f0000 with negotiated timeout 30000 for client /172.20.97.175:57633 bq. 
2016-12-19 17:45:53,497 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@952] - got auth packet /172.20.97.175:57633 bq. 2016-12-19 17:45:53,508 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@986] - auth success /172.20.97.175:57633 So, it is difficult to determine which client made changes. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 13 weeks, 2 days ago | 0|i37ry7: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2648 | Container node never gets deleted if it never had children |
Bug | Resolved | Major | Not A Bug | Unassigned | Hadriel Kaplan | Hadriel Kaplan | 18/Dec/16 13:05 | 01/Aug/17 14:35 | 23/Jan/17 17:11 | 3.5.0 | server | 0 | 4 | If a client creates a Container node, but does not also create a child within that Container, the Container will never be deleted. This may seem like a bug in the client for not subsequently creating a child, but we can't assume the client remains connected, or that the client didn't just change its mind (due to some recipe being canceled, for example). The bug is in ContainerManager.getCandidates(), which only considers a node a candidate if its Cversion > 0. The comments indicate this was done intentionally, to avoid a race condition whereby the Container was created right before a cleaning period, and would get cleaned up before the child could be created - so to avoid that the check is performed to verify the Cversion > 0. Instead, I propose that if the Cversion is 0 but the Ctime is more than a checkIntervalMs old, then it be deleted. In other words, if the Container node has been around for a whole cleaning round already and no child has been created since, then go ahead and clean it up. I can provide a patch if others agree with such a change. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 8 weeks, 3 days ago | 0|i37qvz: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
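The proposed rule can be expressed as a small predicate (hypothetical names, not ContainerManager's real fields): an empty container is a deletion candidate either when children have come and gone (cversion > 0) or when it has survived a full cleaning interval without ever gaining a child.

```java
public class ContainerCandidate {
    // numChildren/cversion/ctimeMs mirror the znode Stat fields; checkIntervalMs
    // is the cleaning period. Returns true if the container may be deleted.
    public static boolean isCandidate(int numChildren, int cversion,
                                      long ctimeMs, long nowMs, long checkIntervalMs) {
        if (numChildren > 0) {
            return false; // still has children; never delete
        }
        if (cversion > 0) {
            return true;  // had children at some point and is now empty
        }
        // Never had children: only delete once a whole cleaning round has passed,
        // so a container created just before a sweep is not reaped prematurely.
        return (nowMs - ctimeMs) > checkIntervalMs;
    }
}
```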
| ZooKeeper | ZOOKEEPER-2647 | Fix TestReconfigServer.cc |
Bug | Closed | Blocker | Fixed | Flavio Paiva Junqueira | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 16/Dec/16 06:33 | 17/May/17 23:43 | 05/Jan/17 12:57 | 3.5.3, 3.6.0 | 0 | 5 | The commit of ZOOKEEPER-761 has introduced a compilation error in one of the test cases. It is a pretty straightforward fix. | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 11 weeks ago | 0|i37osn: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2646 | Java target in branch 3.4 doesn't match documentation |
Bug | Closed | Major | Fixed | Flavio Paiva Junqueira | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 15/Dec/16 17:22 | 31/Mar/17 05:01 | 21/Dec/16 23:48 | 3.4.9 | 3.4.10 | 0 | 5 | Need to update build.xml 1.5->1.6. | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 7 weeks ago | 0|i37nz3: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2645 | If embedded QuorumPeerMain is started with Properties no backupOldConfig should be done |
Bug | Open | Major | Unresolved | Unassigned | Tom Pijl | Tom Pijl | 15/Dec/16 02:36 | 15/Dec/16 03:04 | 3.5.2 | server | 2 | 4 | When starting an embedded _QuorumPeerMain_ by executing _runFromConfig()_ and providing _QuorumPeerConfig_ properties: {code}standaloneEnabled=false initLimit=5 syncLimit=2 clientPort=4101 server.1=nlbantpijl01.infor.com:2101:3101:participant;4101 dataDir=/Storage/zookeeper/server001{code} a NullPointerException is thrown in the _QuorumPeerConfig_ class in the method _backupOldConfig()_ because the property configFileStr is null. A check should be made at the start of _backupOldConfig()_ for whether _configFileStr_ is null; if so, just exit the method. In embedded mode there is no config file, so there is no need to create a backup. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 14 weeks ago | 0|i37mnz: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
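The suggested guard is a one-line early return; a self-contained sketch (the class and return value are illustrative, not the real QuorumPeerConfig implementation):

```java
public class BackupGuard {
    private final String configFileStr; // null when config came from Properties

    public BackupGuard(String configFileStr) {
        this.configFileStr = configFileStr;
    }

    // Returns true if a backup was (or would be) made, false if skipped.
    public boolean backupOldConfig() {
        if (configFileStr == null) {
            return false; // embedded mode: no config file, nothing to back up
        }
        // ... copy configFileStr to a .bak file here in the real code ...
        return true;
    }
}
```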
| ZooKeeper | ZOOKEEPER-2644 | contrib/rest does not include rest.sh when packaged |
Improvement | Open | Minor | Unresolved | Unassigned | Minoru Osuka | Minoru Osuka | 12/Dec/16 09:21 | 30/Jan/19 08:16 | 3.4.9, 3.5.2 | contrib | 0 | 3 | 0 | 600 | contrib/rest does not include rest.sh when packaged. I propose to add rest.sh into tar.gz that it make ZooKeeper REST easier to use. | 100% | 100% | 600 | 0 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 13 weeks, 5 days ago | 0|i37hen: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2643 | Configurable SSLContext |
Wish | Patch Available | Major | Unresolved | Unassigned | George Goddard | George Goddard | 09/Dec/16 13:47 | 09/Dec/16 13:59 | 0 | 1 | Being able to configure the SSLContext in X509Util.java, ZKConfig.java and NettyServerCnxnFactory.java would add flexibility to use cipher suites other than TLSv1. | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 14 weeks, 6 days ago | 0|i37eyn: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2642 | ZooKeeper reconfig API backward compatibility fix |
Bug | Closed | Blocker | Fixed | Jordan Zimmerman | Jordan Zimmerman | Jordan Zimmerman | 07/Dec/16 06:23 | 30/Jan/19 11:16 | 11/Feb/17 18:34 | 3.5.2 | 3.5.3, 3.6.0 | c client, java client | 0 | 7 | 0 | 1800 | ZOOKEEPER-2014 | ZOOKEEPER-2014 moved the reconfig() methods into a new class, ZooKeeperAdmin. It appears this was done to document that these methods have access restrictions. However, this change breaks Apache Curator (and possibly other clients). Curator APIs will have to be changed and/or special methods need to be added. A breaking change of this kind should only be done when the benefit is overwhelming. In this case, the same information can be conveyed with documentation and possibly a deprecation notice. Revert the creation of the ZooKeeperAdmin class and move the reconfig() methods back to the ZooKeeper class with additional documentation. |
100% | 100% | 1800 | 0 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 7 | 9223372036854775807 | 2 years, 48 weeks, 3 days ago | 0|i37afb: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2641 | AvgRequestLatency metric improves to be more accurate |
Improvement | Resolved | Minor | Fixed | maoling | Allen Chan | Allen Chan | 04/Dec/16 23:57 | 20/Jan/19 06:55 | 07/Jan/19 08:39 | 3.4.7, 3.4.9 | 3.6.0 | 1 | 5 | 0 | 15600 | I posted a thread on the mailing list about finding the AvgRequestLatency metric to be 0 all the time. I believe this is a valuable metric because it is useful for baselining the performance of ZK and knowing when something is going wrong. Another user (Arshad Mohammad) wrote up these notes. I am not a developer, so I do not have the ability to patch this; I am filing this so that hopefully someone with developer abilities can add this improvement. "I found two reasons why AvgRequestLatency is almost always 0: 1) Ping requests are counted the most: AvgRequestLatency is calculated as AvgRequestLatency=totalLatency/count. Ping requests come very often and complete very fast; these requests add nothing to totalLatency but add one to count. 2) The wrong data type is chosen to store AvgRequestLatency: it is calculated and stored as a long value instead of a double value. In my opinion the ZooKeeper code should be modified to improve these metrics: i) Ping requests should be ignored while recording the statistics, or at least it should be configurable whether to ignore them. If ping requests are not counted, other metrics will also be more meaningful. ii) AvgRequestLatency should be of double type" |
100% | 100% | 15600 | 0 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 1 year, 10 weeks, 3 days ago | 0|i375bb: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
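Both suggestions are easy to demonstrate in a sketch (hypothetical class, not ServerStats itself): keep the average as a double so integer division cannot round it to 0, and optionally leave ping requests out of the count.

```java
public class LatencyStats {
    private long count;
    private long totalMs;
    private final boolean ignorePings;

    public LatencyStats(boolean ignorePings) {
        this.ignorePings = ignorePings;
    }

    public synchronized void record(long latencyMs, boolean isPing) {
        if (isPing && ignorePings) {
            return; // pings complete fast and would drag the average toward 0
        }
        count++;
        totalMs += latencyMs;
    }

    public synchronized double avgLatencyMs() {
        // double division, so sub-millisecond averages are not truncated to 0
        return count == 0 ? 0.0 : (double) totalMs / count;
    }
}
```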
| ZooKeeper | ZOOKEEPER-2640 | fix test coverage for single threaded C-API |
Test | Open | Major | Unresolved | Unassigned | Benjamin Reed | Benjamin Reed | 04/Dec/16 19:25 | 04/Dec/16 19:28 | c client, tests | 0 | 1 | ZOOKEEPER-761 | the tests for the C-API are mostly for the multithreaded API. we need to get better coverage for the single threaded API. | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 15 weeks, 4 days ago | 0|i3750v: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2639 | Port Quorum Peer mutual authentication SASL feature to branch-3.5 and trunk |
Task | Open | Critical | Unresolved | Rakesh Radhakrishnan | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 30/Nov/16 01:04 | 14/Dec/19 06:08 | 3.7.0 | quorum, security | 1 | 6 | ZOOKEEPER-2792, ZOOKEEPER-2793, ZOOKEEPER-2794, ZOOKEEPER-2850, ZOOKEEPER-2851, ZOOKEEPER-2935 | ZOOKEEPER-1045 | ZooKeeper server-server mutual authentication is implemented in {{branch-3.4}} using ZOOKEEPER-1045 jira. The feature code is not directly portable to other branches due to code difference. This jira can be used to "forward" port the code changes to {{branch-3.5}} and {{trunk}}. | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 1 year, 17 weeks ago | 0|i36y1j: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2638 | ZooKeeper should log which serverCnxnFactory is used during startup |
Improvement | Resolved | Major | Fixed | Abraham Fine | Abraham Fine | Abraham Fine | 29/Nov/16 18:34 | 18/Apr/17 21:12 | 18/Apr/17 19:19 | 3.5.2 | 3.5.4, 3.6.0 | 0 | 5 | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 2 years, 48 weeks, 1 day ago | 0|i36xlj: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2637 | ZOOKEEPER-2135 NettyNettySuiteHammerTest and NettyNettySuiteTest failures |
Sub-task | Open | Major | Unresolved | Vineet Ghatge | Amita Chaudhary | Amita Chaudhary | 28/Nov/16 08:26 | 19/Mar/17 13:14 | 3.6.0 | tests | 0 | 4 | rhel ppc64le | I am getting test failures related to Netty: [junit] Running org.apache.zookeeper.test.NettyNettySuiteHammerTest [junit] Running org.apache.zookeeper.test.NettyNettySuiteHammerTest [junit] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0 sec [junit] Test org.apache.zookeeper.test.NettyNettySuiteHammerTest FAILED (crashed) [junit] Running org.apache.zookeeper.test.NettyNettySuiteTest [junit] Tests run: 101, Failures: 0, Errors: 26, Skipped: 0, Time elapsed: 247.238 sec [junit] Test org.apache.zookeeper.test.NettyNettySuiteTest FAILED [junit] Running org.apache.zookeeper.test.NioNettySuiteHammerTest [junit] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 75.519 sec on machine rhel, ppc64le. for master branch. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 4 days ago | 0|i36uef: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2636 | Fix C build break. |
Bug | Closed | Blocker | Fixed | Michael Han | Michael Han | Michael Han | 24/Nov/16 16:28 | 17/May/17 23:44 | 25/Nov/16 01:10 | 3.5.3 | 3.5.3, 3.6.0 | jute | 0 | 5 | ZOOKEEPER-2628 | The C client build is broken after ZOOKEEPER-2628 was merged in. After a little debugging, I found that the build breaks because zookeeper.jute.h and zookeeper.jute.c are not completely generated. * The culprit is the code change introduced in ZOOKEEPER-2628, where we wrap {code}JRecord.genCCode{code} in a try / catch / finally block: the file writers were prematurely closed in the finally block, which prevents the remainder of the zookeeper.jute.h/c files from being generated. * The fix to {code}JRecord.genCCode{code} in ZOOKEEPER-2628 was made because a findbugs warning was directly associated with the code. Due to the subtlety of the file writer ownership, we did not catch the issue during code review. * The build break was not caught in pre-commit builds either ([an example|https://builds.apache.org/job/PreCommit-ZOOKEEPER-github-pr-build/72//console]), where all tests passed, including the C client tests. I suspect we might have another bug where cached generated files that should be regenerated are not - this needs more investigation. * The fix is simply to revert the change to this specific method. Findbugs does not complain anymore because the previous warning pertaining to this code block was fixed at the call site of {code}JRecord.genCCode{code}, so by reverting the change we still have zero findbugs warnings. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 16 weeks, 6 days ago | 0|i36rhr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
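Editor's aside: the ownership bug described in ZOOKEEPER-2636 above — a helper that closes a shared writer in its finally block, truncating everything the caller writes afterwards — can be reproduced with plain java.io. The class and method names below are illustrative stand-ins, not the actual jute compiler code.

```java
import java.io.IOException;
import java.io.Writer;

// Illustrative sketch of the writer-ownership bug: the helper does not own
// the writer, so closing it in finally silently discards later output.
public class WriterOwnershipSketch {

    // Stand-in for a file writer; a real FileWriter would fail on
    // write-after-close, but this stub drops output to show the truncation.
    static final class ClosableBuffer extends Writer {
        final StringBuilder buf = new StringBuilder();
        boolean closed;
        @Override public void write(char[] cbuf, int off, int len) {
            if (!closed) buf.append(cbuf, off, len);
        }
        @Override public void flush() {}
        @Override public void close() { closed = true; }
    }

    // Mirrors the shape of the try / finally added in ZOOKEEPER-2628:
    // closing the caller's writer here is the bug.
    static void emitRecordBuggy(Writer w) throws IOException {
        try {
            w.write("struct Record {};\n");
        } finally {
            w.close(); // wrong: the caller still needs this writer
        }
    }

    public static void main(String[] args) throws IOException {
        ClosableBuffer out = new ClosableBuffer();
        emitRecordBuggy(out);
        out.write("/* trailer */\n"); // lost: the helper already closed the writer
        System.out.print(out.buf);    // only the struct line survives
    }
}
```

The fix in the ticket is exactly to move ownership back to the caller: the method that opened the writer is the one that closes it.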
| ZooKeeper | ZOOKEEPER-2635 | Regenerate documentation |
Bug | Closed | Blocker | Fixed | Michael Han | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 24/Nov/16 11:38 | 17/May/17 23:44 | 21/Mar/17 15:10 | 3.5.3, 3.6.0 | documentation | 0 | 5 | Some recent commits did not regenerate the documentation even though they had documentation changes, we need to do it before releasing. | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 2 days ago | 0|i36rbz: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2634 | null data in zknode data |
Bug | Open | Major | Unresolved | Unassigned | nayeem | nayeem | 20/Nov/16 08:01 | 26/Aug/18 23:09 | 3.4.5 | java client | 0 | 3 | linux zookeeper 3.4.5 | We can create a zk node with null data as shown below. ZkConnect connector = new ZkConnect(); ZooKeeper zk = connector.connect("host:port"); String newNode = "/nayeemDate3"; String strdata = String.valueOf('\u0000'); connector.createNode(newNode, strdata.getBytes()); When we get the data for the zknode: 2016-11-17 23:55:48,926 [myid:] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:5181:NIOServerCnxn@349] - caught end of stream exception EndOfStreamException: Unable to read additional data from client sessionid 0x1585061acbd0613, likely client has closed socket at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220) at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208) at java.lang.Thread.run(Thread.java:745) 2016-11-17 23:55:48,926 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:5181:NIOServerCnxn@1001] - Closed socket connection for client /10.10.72.93:48005 which had sessionid 0x1585061acbd0613 To resolve the issue, the workaround is to delete the zknode; is this the right behaviour, or is it a bug? Data from zkcli: [zk: 10.10.72.93:5181(CONNECTED) 1] ls /nayeemDate3 [] [zk: 10.10.72.93:5181(CONNECTED) 2] get /nayeemDate3 null cZxid = 0xdc47 ctime = Fri Nov 18 13:29:43 IST 2016 mZxid = 0xdc47 mtime = Fri Nov 18 13:29:43 IST 2016 pZxid = 0xdc47 cversion = 0 dataVersion = 0 aclVersion = 0 ephemeralOwner = 0x0 dataLength = 0 numChildren = 0 [zk: 10.10.72.93:5181(CONNECTED) 3] stat /nayeemDate3 cZxid = 0xdc47 ctime = Fri Nov 18 13:29:43 IST 2016 mZxid = 0xdc47 mtime = Fri Nov 18 13:29:43 IST 2016 pZxid = 0xdc47 cversion = 0 dataVersion = 0 aclVersion = 0 ephemeralOwner = 0x0 dataLength = 0 numChildren = 0 |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 |
Important
|
1 year, 29 weeks, 3 days ago | 0|i36jo7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
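Editor's aside on ZOOKEEPER-2634 above: the payload built by the reporter's snippet is worth checking in isolation. A one-character string containing U+0000 encodes to a single NUL byte, not an empty array (the reporter uses the default charset; for U+0000, UTF-8 and common platform charsets agree on a single zero byte). This quick standalone check needs no ZooKeeper at all:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// The reporter's payload: a one-character string containing U+0000.
// It encodes to one byte of value 0 — data that prints as nothing
// in zkCli's "get" output but is not an empty byte array.
public class NulPayloadCheck {
    public static void main(String[] args) {
        String strdata = String.valueOf('\u0000');
        byte[] data = strdata.getBytes(StandardCharsets.UTF_8);
        System.out.println(data.length);           // 1
        System.out.println(Arrays.toString(data)); // [0]
    }
}
```

This makes the reported `dataLength = 0` in the zkCli stat output part of the puzzle: a one-byte payload would normally show `dataLength = 1`.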
| ZooKeeper | ZOOKEEPER-2633 | Build failure in contrib/zkfuse with gcc 6.x |
Bug | Closed | Minor | Fixed | Raghavendra Prabhu | Raghavendra Prabhu | Raghavendra Prabhu | 17/Nov/16 06:11 | 31/Mar/17 05:01 | 22/Jan/17 19:43 | 3.4.10, 3.5.3, 3.6.0 | contrib-zkfuse | 0 | 6 | gcc --version gcc (GCC) 6.2.1 20160830 Copyright (C) 2016 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. g++ --version g++ (GCC) 6.2.1 20160830 Copyright (C) 2016 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. CFLAGS, CXXFLAGS, and LDFLAGS are unset, hence default. uname -a Linux lative 4.8.8-1-ARCH #1 SMP PREEMPT Tue Nov 15 08:25:24 CET 2016 x86_64 GNU/Linux |
The build in contrib/zkfuse fails with {noformat} make (CDPATH="${ZSH_VERSION+.}:" && cd . && /bin/sh /home/raghu/zookeeper/src/contrib/zkfuse/missing autoheader) rm -f stamp-h1 touch config.h.in cd . && /bin/sh ./config.status config.h config.status: creating config.h config.status: config.h is unchanged make all-recursive make[1]: Entering directory '/home/raghu/zookeeper/src/contrib/zkfuse' Making all in src make[2]: Entering directory '/home/raghu/zookeeper/src/contrib/zkfuse/src' g++ -DHAVE_CONFIG_H -I. -I.. -I/home/raghu/zookeeper/src/contrib/zkfuse/../../c/include -I/home/raghu/zookeeper/src/contrib/zkfuse/../../c/generated -I../include -I/usr/include -D_FILE_OFFSET_BITS=64 -D_REENTRANT -march=x86-64 -mtune=generic -O2 -pipe -fstack-protector-strong -MT zkfuse.o -MD -MP -MF .deps/zkfuse.Tpo -c -o zkfuse.o zkfuse.cc g++ -DHAVE_CONFIG_H -I. -I.. -I/home/raghu/zookeeper/src/contrib/zkfuse/../../c/include -I/home/raghu/zookeeper/src/contrib/zkfuse/../../c/generated -I../include -I/usr/include -D_FILE_OFFSET_BITS=64 -D_REENTRANT -march=x86-64 -mtune=generic -O2 -pipe -fstack-protector-strong -MT zkadapter.o -MD -MP -MF .deps/zkadapter.Tpo -c -o zkadapter.o zkadapter.cc In file included from zkadapter.h:34:0, from zkadapter.cc:24: event.h:216:9: error: reference to ‘shared_ptr’ is ambiguous shared_ptr<AbstractEventWrapper> m_eventWrapper; ^~~~~~~~~~ In file included from /usr/include/boost/throw_exception.hpp:42:0, from /usr/include/boost/smart_ptr/shared_ptr.hpp:27, from /usr/include/boost/shared_ptr.hpp:17, from event.h:30, from zkadapter.h:34, from zkadapter.cc:24: /usr/include/boost/exception/exception.hpp:148:11: note: candidates are: template<class T> class boost::shared_ptr class shared_ptr; ^~~~~~~~~~ In file included from /usr/include/c++/6.2.1/bits/shared_ptr.h:52:0, from /usr/include/c++/6.2.1/memory:82, from /usr/include/boost/config/no_tr1/memory.hpp:21, from /usr/include/boost/smart_ptr/shared_ptr.hpp:23, from /usr/include/boost/shared_ptr.hpp:17, 
from event.h:30, from zkadapter.h:34, from zkadapter.cc:24: /usr/include/c++/6.2.1/bits/shared_ptr_base.h:343:11: note: template<class _Tp> class std::shared_ptr class shared_ptr; ^~~~~~~~~~ In file included from zkadapter.h:34:0, from zkadapter.cc:24: event.h: In constructor ‘zkfuse::GenericEvent::GenericEvent(int, zkfuse::AbstractEventWrapper*)’: event.h:189:27: error: class ‘zkfuse::GenericEvent’ does not have any field named ‘m_eventWrapper’ m_type(type), m_eventWrapper(eventWrapper) { ^~~~~~~~~~~~~~ event.h: In member function ‘void* zkfuse::GenericEvent::getEvent() const’: event.h:204:41: error: ‘m_eventWrapper’ was not declared in this scope void *getEvent() const { return m_eventWrapper->getWrapee(); } ^~~~~~~~~~~~~~ In file included from zkadapter.h:34:0, from zkfuse.cc:54: event.h:216:9: error: reference to ‘shared_ptr’ is ambiguous shared_ptr<AbstractEventWrapper> m_eventWrapper; ^~~~~~~~~~ In file included from /usr/include/boost/throw_exception.hpp:42:0, from /usr/include/boost/smart_ptr/detail/shared_count.hpp:27, from /usr/include/boost/smart_ptr/weak_ptr.hpp:17, from /usr/include/boost/weak_ptr.hpp:16, from zkfuse.cc:50: /usr/include/boost/exception/exception.hpp:148:11: note: candidates are: template<class T> class boost::shared_ptr class shared_ptr; ^~~~~~~~~~ In file included from /usr/include/c++/6.2.1/bits/shared_ptr.h:52:0, from /usr/include/c++/6.2.1/memory:82, from /usr/include/boost/smart_ptr/weak_ptr.hpp:16, from /usr/include/boost/weak_ptr.hpp:16, from zkfuse.cc:50: /usr/include/c++/6.2.1/bits/shared_ptr_base.h:343:11: note: template<class _Tp> class std::shared_ptr class shared_ptr; ^~~~~~~~~~ In file included from zkadapter.h:34:0, from zkfuse.cc:54: event.h: In constructor ‘zkfuse::GenericEvent::GenericEvent(int, zkfuse::AbstractEventWrapper*)’: event.h:189:27: error: class ‘zkfuse::GenericEvent’ does not have any field named ‘m_eventWrapper’ m_type(type), m_eventWrapper(eventWrapper) { ^~~~~~~~~~~~~~ event.h: In member function ‘void* 
zkfuse::GenericEvent::getEvent() const’: event.h:204:41: error: ‘m_eventWrapper’ was not declared in this scope void *getEvent() const { return m_eventWrapper->getWrapee(); } ^~~~~~~~~~~~~~ zkadapter.cc: In member function ‘bool zk::ZooKeeperAdapter::deleteNode(const string&, bool, int)’: zkadapter.cc:676:52: error: no matching function for call to ‘zk::ZooKeeperAdapter::getNodeChildren(std::vector<std::__cxx11::basic_string<char> >&, const string&, bool)’ getNodeChildren( nodeList, path, false ); ^ In file included from zkadapter.cc:24:0: zkadapter.h:440:14: note: candidate: void zk::ZooKeeperAdapter::getNodeChildren(std::vector<std::__cxx11::basic_string<char> >&, const string&, zk::ZKEventListener*, void*) void getNodeChildren(vector<string> &children, ^~~~~~~~~~~~~~~ zkadapter.h:440:14: note: no known conversion for argument 3 from ‘bool’ to ‘zk::ZKEventListener* {aka zkfuse::EventListener<zk::ZKWatcherEvent>*}’ make[2]: *** [Makefile:310: zkadapter.o] Error 1 make[2]: *** Waiting for unfinished jobs.... make[2]: *** [Makefile:310: zkfuse.o] Error 1 make[2]: Leaving directory '/home/raghu/zookeeper/src/contrib/zkfuse/src' make[1]: *** [Makefile:352: all-recursive] Error 1 make[1]: Leaving directory '/home/raghu/zookeeper/src/contrib/zkfuse' make: *** [Makefile:293: all] Error 2 ================================================================================================================= make make all-recursive make[1]: Entering directory '/home/raghu/zookeeper/src/contrib/zkfuse' Making all in src make[2]: Entering directory '/home/raghu/zookeeper/src/contrib/zkfuse/src' g++ -DHAVE_CONFIG_H -I. -I.. -I/home/raghu/zookeeper/src/contrib/zkfuse/../../c/include -I/home/raghu/zookeeper/src/contrib/zkfuse/../../c/generated -I../include -I/usr/include -D_FILE_OFFSET_BITS=64 -D_REENTRANT -march=x86-64 -mtune=generic -O2 -pipe -fstack-protector-strong -MT zkfuse.o -MD -MP -MF .deps/zkfuse.Tpo -c -o zkfuse.o zkfuse.cc g++ -DHAVE_CONFIG_H -I. -I.. 
-I/home/raghu/zookeeper/src/contrib/zkfuse/../../c/include -I/home/raghu/zookeeper/src/contrib/zkfuse/../../c/generated -I../include -I/usr/include -D_FILE_OFFSET_BITS=64 -D_REENTRANT -march=x86-64 -mtune=generic -O2 -pipe -fstack-protector-strong -MT zkadapter.o -MD -MP -MF .deps/zkadapter.Tpo -c -o zkadapter.o zkadapter.cc zkadapter.cc: In member function ‘bool zk::ZooKeeperAdapter::deleteNode(const string&, bool, int)’: zkadapter.cc:676:52: error: no matching function for call to ‘zk::ZooKeeperAdapter::getNodeChildren(std::vector<std::__cxx11::basic_string<char> >&, const string&, bool)’ getNodeChildren( nodeList, path, false ); ^ In file included from zkadapter.cc:24:0: zkadapter.h:440:14: note: candidate: void zk::ZooKeeperAdapter::getNodeChildren(std::vector<std::__cxx11::basic_string<char> >&, const string&, zk::ZKEventListener*, void*) void getNodeChildren(vector<string> &children, ^~~~~~~~~~~~~~~ zkadapter.h:440:14: note: no known conversion for argument 3 from ‘bool’ to ‘zk::ZKEventListener* {aka zkfuse::EventListener<zk::ZKWatcherEvent>*}’ make[2]: *** [Makefile:310: zkadapter.o] Error 1 make[2]: *** Waiting for unfinished jobs.... mv -f .deps/zkfuse.Tpo .deps/zkfuse.Po make[2]: Leaving directory '/home/raghu/zookeeper/src/contrib/zkfuse/src' make[1]: *** [Makefile:352: all-recursive] Error 1 make[1]: Leaving directory '/home/raghu/zookeeper/src/contrib/zkfuse' make: *** [Makefile:293: all] Error 2 {noformat} in two different places. Fixed here: https://github.com/ronin13/zookeeper/commit/726a8eda08e4022fcbcb0581ec2650e07e39910b |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 8 weeks, 1 day ago | 0|i36g6n: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2632 | Add option to inform JIRA_PASSWORD at CLI prompt |
Improvement | Resolved | Trivial | Fixed | Edward Ribeiro | Edward Ribeiro | Edward Ribeiro | 14/Nov/16 16:28 | 24/Nov/16 11:14 | 18/Nov/16 17:07 | 3.6.0 | 0 | 3 | Adds the option to prompt for the JIRA password if JIRA_USERNAME is set, but JIRA_PASSWORD is not. Also, asks if the user wants to continue the merge process if the python jira lib is not installed. | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 17 weeks ago |
Reviewed
|
0|i36b67: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2631 | Make issue extraction in the git pull request script more robust |
Improvement | Resolved | Major | Fixed | Flavio Paiva Junqueira | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 11/Nov/16 09:11 | 30/Jan/19 08:08 | 12/Nov/16 22:55 | 3.6.0 | build | 0 | 4 | 0 | 600 | The QA build is failing for some pull requests because the issue title isn't following the expected format. The issue extraction right now is a bit fragile, so this is to fix the issue. | 100% | 100% | 600 | 0 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 18 weeks, 4 days ago | 0|i3682f: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2630 | Use interface type instead of implementation type when appropriate. |
Improvement | Resolved | Trivial | Fixed | Tamas Penzes | Michael Han | Michael Han | 10/Nov/16 12:19 | 31/Jan/19 17:13 | 11/Sep/17 12:55 | 3.6.0 | 0 | 5 | 0 | 2400 | There are a couple of places in the code base where we declare a field / variable as an implementation type (e.g. HashMap, HashSet) instead of an interface type (e.g. Map, Set), while in other places we do the opposite and declare the interface type. A quick check indicates that most, if not all, of these places could be updated so we have a consistent style across the code base (preferring the interface type), which is also a best-practice coding style. See more info at https://github.com/apache/zookeeper/pull/102 |
100% | 100% | 2400 | 0 | newbie, pull-request-available, refactoring | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 2 years, 22 weeks, 6 days ago | 0|i366of: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
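Editor's aside: the style ZOOKEEPER-2630 above asks for — declare against the interface, construct the implementation — looks like this. A generic illustration, not actual ZooKeeper code; all names here are invented.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Fields and return types are declared as Map/Set (the interface);
// only the right-hand side names the concrete implementation.
public class InterfaceTypeStyle {
    private final Map<String, Long> sessionTimeouts = new HashMap<>(); // not HashMap<...>
    private final Set<String> ephemerals = new HashSet<>();            // not HashSet<...>

    // Callers depend only on Map, so the implementation can be swapped
    // (e.g. for a ConcurrentHashMap) without touching any call site.
    public Map<String, Long> timeouts() {
        return sessionTimeouts;
    }

    public Set<String> ephemerals() {
        return ephemerals;
    }

    public static void main(String[] args) {
        InterfaceTypeStyle s = new InterfaceTypeStyle();
        s.timeouts().put("session-1", 4000L);
        s.ephemerals().add("/locks/lock-1");
        System.out.println(s.timeouts().size() + " " + s.ephemerals().size()); // 1 1
    }
}
```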
| ZooKeeper | ZOOKEEPER-2629 | Clean up git pull request QA script |
Improvement | Open | Minor | Unresolved | Flavio Paiva Junqueira | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 06/Nov/16 14:16 | 06/Nov/16 14:17 | 0 | 1 | ZOOKEEPER-2624 | We have introduced a script for QA of pull requests on github in ZOOKEEPER-2624. There is some cleanup left to do on the script, e.g., indentation, consistency of brackets, etc. This jira to do this clean-up. | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 19 weeks, 4 days ago | 0|i35zbb: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2628 | Investigate and fix findbug warnings |
Bug | Closed | Major | Fixed | Michael Han | Michael Han | Michael Han | 04/Nov/16 20:49 | 17/May/17 23:44 | 24/Nov/16 11:21 | 3.5.2 | 3.5.3, 3.6.0 | 0 | 6 | ZOOKEEPER-2636 | Findbug tool used by Jenkins bot is upgraded to 3.0.1 from 2.0.3 according to Infra team, and this leads to 20 new warnings produced by findbug. The warning reports can be found on [pre commit builds|https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/] with build number >= 3513. These warnings need to be triaged and fixed if they are legitimate. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 17 weeks ago | 0|i35x0n: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2627 | Remove ZRWSERVERFOUND from C client and replace handle_error with something more semantically explicit for r/w server reconnect. |
Bug | Closed | Major | Fixed | Michael Han | Michael Han | Michael Han | 03/Nov/16 16:52 | 17/May/17 23:44 | 03/Dec/16 12:17 | 3.5.2 | 3.5.3 | c client | 0 | 5 | While working on ZOOKEEPER-2014, I noticed a discrepancy between the Java and C clients regarding the error code definitions. There is a {noformat}ZRWSERVERFOUND = -122{noformat} definition in the C client which is not present in the Java client's KeeperException.Code definitions. This discrepancy was introduced by ZOOKEEPER-827, where the C client logic simulated the Java client's logic when doing a read/write server search while the client is in read-only mode. Once the client finds a valid read/write server, it will try to disconnect and reconnect to this read/write server, as we always prefer a r/w server when in read-only mode. The Java client does this disconnect/reconnect by throwing a RWServerFoundException (instead of a KeeperException) to put the client in the disconnected state, then waiting for the client to reconnect with the r/w server address set before throwing the exception. The C client does something similar, but instead of an explicit disconnect / clean-up routine it relies on handle_error to do the job, which is where ZRWSERVERFOUND was introduced. I propose we remove the ZRWSERVERFOUND error code from the C client and use an explicit routine instead of handle_error for the r/w server search, for two reasons: * ZRWSERVERFOUND is not something ZK client users need to know about. It is a pure implementation detail used to alter the connection state of the client, and ZK client users have no desire or need to handle such errors, as R/W server scanning and reconnection is handled transparently by the ZK client library. * To maintain consistency between the Java and C clients regarding the error code definitions. 
Without removing this from the C client, we would need to replace RWServerFoundException in the Java client with a new KeeperException, and, for the reason mentioned above, we don't need a KeeperException here because such an implementation detail does not have to be exposed to end users (unless we provided an alternative for users to opt out of automatic R/W server switching when in read-only mode, which we don't). |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 3 years, 15 weeks, 5 days ago | 0|i35tv3: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2626 | log4j.properties don't get respected |
Bug | Open | Major | Unresolved | Unassigned | Arne Bachmann | Arne Bachmann | 02/Nov/16 09:04 | 26/Jan/18 11:51 | 3.5.2 | scripts | 0 | 4 | Linux vagrant-ubuntu-trusty-32 3.13.0-100-generic #147-Ubuntu SMP Tue Oct 18 16:49:53 UTC 2016 i686 i686 i686 GNU/Linux | I put the log4j.properties into the conf folder, plus a symlink to the base zookeeper folder, as described in the documentation. Neither of them seems to be picked up, as my rolling logger is not recognized (no logs created), and bin/zkServer.sh print-cmd also shows the wrong logger configuration. Is that a problem with the start script, or did I put the properties file in the wrong place? Note, however, that my additional java command-line options (from JAVA_TOOL_OPTIONS) also don't get picked up by the start script, as can be seen with ps aux | grep java (e.g. -Xmx1000m instead of -Xmx500 as I defined it). The scripts refer to a lot of environment variables that aren't explained in the documentation and are defined nowhere; I can't get it to run. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 2 years, 7 weeks, 6 days ago | 0|i35q6v: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
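Editor's aside: for anyone reproducing ZOOKEEPER-2626 above, a minimal `conf/log4j.properties` with a rolling file appender looks roughly like the following. The appender name and values are illustrative; whether the stock zkServer.sh actually picks this file up is exactly what the ticket questions.

```properties
# Root logger: INFO to a rolling file appender (log4j 1.x syntax)
log4j.rootLogger=INFO, ROLLINGFILE

log4j.appender.ROLLINGFILE=org.apache.log4j.RollingFileAppender
log4j.appender.ROLLINGFILE.File=zookeeper.log
log4j.appender.ROLLINGFILE.MaxFileSize=10MB
log4j.appender.ROLLINGFILE.MaxBackupIndex=10
log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n
```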
| ZooKeeper | ZOOKEEPER-2625 | zkServer.sh creates PID file in the folder data?/ instead of data/ |
Bug | Open | Minor | Unresolved | Unassigned | Arne Bachmann | Arne Bachmann | 02/Nov/16 08:59 | 19/Feb/19 16:58 | 3.5.2 | scripts | 0 | 4 | 0 | 5400 | Linux vagrant-ubuntu-trusty-32 3.13.0-100-generic #147-Ubuntu SMP Tue Oct 18 16:49:53 UTC 2016 i686 i686 i686 GNU/Linux | I provision a vagrant vm that installs zookeeper into /home/vagrant/zk and adjusts all owner and read/write rights. As the vagrant user, I start zookeeper with bin/zkServer.sh start /vagrant/data/zoo.cfg However, the folder data? (or data^M) gets created with the PID inside, instead of putting it into the data folder, which contains the version-2 folder. Since I'm using the official start scripts, I'm at a loss. Also, the data? folder comes with root:root ownership, which is strange, as zkServer.sh is executed by the vagrant user. |
100% | 100% | 5400 | 0 | build, pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 18 weeks, 5 days ago | 0|i35q67: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2624 | Add test script for pull requests |
Improvement | Resolved | Major | Fixed | Flavio Paiva Junqueira | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 31/Oct/16 08:12 | 30/Jan/19 08:07 | 06/Nov/16 15:44 | scripts | 0 | 4 | 0 | 600 | ZOOKEEPER-2629 | We need a script similar to {{test-patch.sh}} to handle QA builds for pull requests. | 100% | 100% | 600 | 0 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 19 weeks, 4 days ago | 0|i35mav: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2623 | CheckVersion outside of Multi causes NullPointerException |
Bug | Open | Minor | Unresolved | Unassigned | Diego Ongaro | Diego Ongaro | 27/Oct/16 22:29 | 03/Nov/16 09:54 | 0 | 4 | I wasn't sure if check version (opcode 13) was permitted outside of a multi op, so I tried it. My server crashed with a NullPointerException and became unusable until restarted. I guess it's not allowed, but perhaps the server should handle this more gracefully? Here are the server logs: {noformat} Accepted socket connection from /0:0:0:0:0:0:0:1:51737 Session establishment request from client /0:0:0:0:0:0:0:1:51737 client's lastZxid is 0x0 Connection request from old client /0:0:0:0:0:0:0:1:51737; will be dropped if server is in r-o mode Client attempting to establish new session at /0:0:0:0:0:0:0:1:51737 :Fsessionid:0x10025651faa0000 type:createSession cxid:0x0 zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a Processing request:: sessionid:0x10025651faa0000 type:createSession cxid:0x0 zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a Got zxid 0x60000065e expected 0x1 Creating new log file: log.60000065e Committing request:: sessionid:0x10025651faa0000 type:createSession cxid:0x0 zxid:0x60000065e txntype:-10 reqpath:n/a Processing request:: sessionid:0x10025651faa0000 type:createSession cxid:0x0 zxid:0x60000065e txntype:-10 reqpath:n/a :Esessionid:0x10025651faa0000 type:createSession cxid:0x0 zxid:0x60000065e txntype:-10 reqpath:n/a sessionid:0x10025651faa0000 type:createSession cxid:0x0 zxid:0x60000065e txntype:-10 reqpath:n/a Add a buffer to outgoingBuffers, sk sun.nio.ch.SelectionKeyImpl@28e9f397 is valid: true Established session 0x10025651faa0000 with negotiated timeout 20000 for client /0:0:0:0:0:0:0:1:51737 :Fsessionid:0x10025651faa0000 type:check cxid:0x1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/ Processing request:: sessionid:0x10025651faa0000 type:check cxid:0x1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/ Processing request:: sessionid:0x10025651faa0000 type:check cxid:0x1 zxid:0xfffffffffffffffe txntype:unknown 
reqpath:/ Exception causing close of session 0x10025651faa0000: Connection reset by peer :Esessionid:0x10025651faa0000 type:check cxid:0x1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/ IOException stack trace java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.read0(Native Method) at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) at sun.nio.ch.IOUtil.read(IOUtil.java:197) at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:320) at org.apache.zookeeper.server.NIOServerCnxnFactory$IOWorkRequest.doWork(NIOServerCnxnFactory.java:530) at org.apache.zookeeper.server.WorkerService$ScheduledWorkRequest.run(WorkerService.java:162) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Unexpected exception java.lang.NullPointerException at org.apache.zookeeper.server.ZKDatabase.addCommittedProposal(ZKDatabase.java:252) at org.apache.zookeeper.server.FinalRequestProcessor.processRequest(FinalRequestProcessor.java:127) at org.apache.zookeeper.server.quorum.CommitProcessor$CommitWorkRequest.doWork(CommitProcessor.java:362) at org.apache.zookeeper.server.WorkerService$ScheduledWorkRequest.run(WorkerService.java:162) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Committing request:: sessionid:0x10025651faa0000 type:error cxid:0x1 zxid:0x60000065f txntype:-1 reqpath:n/a Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id1,name1=replica.1,name2=Follower,name3=Connections,name4="0:0:0:0:0:0:0:1",name5=0x10025651faa0000] Exception thrown by downstream processor, unable to 
continue. CommitProcessor exited loop! Closed socket connection for client /0:0:0:0:0:0:0:1:51737 which had sessionid 0x10025651faa0000 {noformat} And here's a one-liner to repro, which does a ConnectRequest followed by a {{CheckVersion(path="/", version=89235}}}: {noformat} echo AAAALAAAAAAAAAAAAAAAAAAAJxAAAAAAAAAAAAAAABAAAAAAAAAAAAAAAAAAAAAAAAAAEQAAAAEAAAANAAAAAS8AAVyT | base64 --decode | nc localhost 2181 >/dev/null {noformat} This is against master as of a couple of weeks ago (f78061a). I haven't checked to see which versions are affected. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 20 weeks ago | 0|i35itj: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2622 | ZooTrace.logQuorumPacket does nothing |
Bug | Closed | Trivial | Fixed | Flavio Paiva Junqueira | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 26/Oct/16 10:17 | 23/Jan/20 13:16 | 28/Jan/17 00:52 | 3.4.10, 3.5.3, 3.6.0 | 0 | 7 | The method simply returns and there is some code commented out: {code} // if (isTraceEnabled(log, mask)) { // logTraceMessage(LOG, mask, direction + " " // + FollowerHandler.packetToString(qp)); // } {code} There are calls to this trace method, so I think we should fix it. |
newbie | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 7 weeks, 5 days ago | 0|i35f6v: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
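Editor's aside: the commented-out block quoted in ZOOKEEPER-2622 above implements a guard-then-log pattern — check the trace mask first, and only then build and emit the packet message. A minimal standalone sketch of that pattern follows; the names and mask value are illustrative stand-ins, not the real ZooTrace API.

```java
// Illustrative guard-then-log sketch (not the real ZooTrace code).
public class TraceGuardSketch {
    static final long QUORUM_PACKET_TRACE_MASK = 1 << 4; // illustrative bit
    static long enabledMasks = QUORUM_PACKET_TRACE_MASK; // currently enabled masks

    static boolean isTraceEnabled(long mask) {
        return (enabledMasks & mask) != 0;
    }

    // Returns the formatted trace line, or null when tracing is off, so the
    // (potentially expensive) packet formatting only runs when enabled.
    static String logQuorumPacket(String direction, String packet) {
        if (isTraceEnabled(QUORUM_PACKET_TRACE_MASK)) {
            return direction + " " + packet;
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(logQuorumPacket("sent", "PING")); // sent PING
        enabledMasks = 0;
        System.out.println(logQuorumPacket("sent", "PING")); // null
    }
}
```

The point of the guard is exactly what the ticket's commented-out code intended: avoid formatting the packet string at all unless the mask is enabled.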
| ZooKeeper | ZOOKEEPER-2621 | ZooKeeper doesn't start on MINGW32 (Windows) |
Bug | Closed | Major | Fixed | Amichai Rothman | Amichai Rothman | Amichai Rothman | 26/Oct/16 05:45 | 20/May/19 13:51 | 18/Mar/19 01:27 | 3.4.9 | 3.6.0, 3.5.5, 3.4.15 | scripts | 0 | 6 | 0 | 2400 | MINGW32_NT-6.1 on Windows 7 (e.g. git bash) | The ZooKeeper scripts fail due to missing cygpath path conversion in a MINGW32 environment, such as when running from git bash (installed by default when installing Git for Windows). The fix is to add the line {quote} MINGW*) cygwin=true ;; {quote} near the bottom of the zkEnv.sh script, in the case statement that checks for a cygwin environment. |
100% | 100% | 2400 | 0 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 1 year, 3 days ago | 0|i35eqn: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2620 | Add comments to testReadOnlySnapshotDir and testReadOnlyTxnLogDir indicating that the tests will fail when run as root |
Improvement | Closed | Major | Fixed | Abraham Fine | Abraham Fine | Abraham Fine | 24/Oct/16 17:03 | 31/Mar/17 05:01 | 05/Jan/17 16:33 | 3.4.9, 3.5.2 | 3.4.10, 3.5.3, 3.6.0 | tests | 0 | 4 | testReadOnlySnapshotDir and testReadOnlyTxnLogDir test the impact of changes to file system permissions on ZooKeeper server startup. After debugging test failures [~hanm] was experiencing, we noticed that when the unit tests are run as root, these tests fail. We should have a comment to clarify this. |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 3 years, 11 weeks ago | 0|i35bqf: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2619 | Client library reconnecting breaks FIFO client order |
Bug | Open | Major | Unresolved | Unassigned | Diego Ongaro | Diego Ongaro | 21/Oct/16 20:01 | 03/Nov/16 10:54 | 0 | 9 | According to the USENIX ATC 2010 [paper|https://www.usenix.org/conference/usenix-atc-10/zookeeper-wait-free-coordination-internet-scale-systems], ZooKeeper provides "FIFO client order: all requests from a given client are executed in the order that they were sent by the client." I believe applications written using the Java client library are unable to rely on this guarantee, and any current application that does so is broken. Other client libraries are also likely to be affected. Consider this application, which is simplified from the algorithm described on Page 4 (right column) of the paper: {code} zk = new ZooKeeper(...) zk.createAsync("/data-23857", "...", callback) zk.createSync("/pointer", "/data-23857") {code} Assume an empty ZooKeeper database to begin with and no other writers. Applying the above definition, if the ZooKeeper database contains /pointer, it must also contain /data-23857. Now consider this series of unfortunate events: {code} zk = new ZooKeeper(...) // The library establishes a TCP connection. zk.createAsync("/data-23857", "...", callback) // The library/kernel closes the TCP connection because it times out, and // the create of /data-23857 is doomed to fail with ConnectionLoss. Suppose // that it never reaches the server. // The library establishes a new TCP connection. zk.createSync("/pointer", "/data-23857") // The create of /pointer succeeds. {code} That's the problem: subsequent operations get assigned to the new connection and succeed, while earlier operations fail. In general, I believe it's impossible to have a system with the following three properties: # FIFO client order for asynchronous operations, # Failing operations when connections are lost, AND # Transparently reconnecting when connections are lost. 
To argue this, consider an application that issues a series of pipelined operations, then upon noticing a connection loss, issues a series of recovery operations, repeating the recovery procedure as necessary. If a pipelined operation fails, all subsequent operations in the pipeline must also fail. Yet the client must also carry on eventually: the recovery operations cannot be trivially failed forever. Unfortunately, the client library does not know where the pipelined operations end and the recovery operations begin. At the time of a connection loss, subsequent pipelined operations may or may not be queued in the library; others might be upcoming in the application thread. If the library re-establishes a connection too early, it will send pipelined operations out of FIFO client order. I considered a possible workaround of having the client diligently check its callbacks and watchers for connection loss events, and do its best to stop the subsequent pipelined operations at the first sign of a connection loss. In addition to being a large burden for the application, this does not solve the problem all the time. In particular, if the callback thread is delayed significantly (as can happen due to excessive computation or scheduling hiccups), the application may not learn about the connection loss event until after the connection has been re-established and after dependent pipelined operations have already been transmitted over the new connection. I suggest the following API changes to fix the problem: - Add a method ZooKeeper.getConnection() returning a ZKConnection object. ZKConnection would wrap a TCP connection. It would include all synchronous and asynchronous operations currently defined on the ZooKeeper class. Upon a connection loss on a ZKConnection, all subsequent operations on the same ZKConnection would return a Connection Loss error. 
Upon noticing, the client would need to call ZooKeeper.getConnection() again to get a working ZKConnection object, and it would execute its recovery procedure on this new connection. - Deprecate all asynchronous methods on the ZooKeeper object. These are unsafe to use if the caller assumes they're getting FIFO client order. - No changes to the protocols or servers are required. I recognize this could cause a lot of code churn for both ZooKeeper and projects that use it. On the other hand, the existing asynchronous calls in applications should now be audited anyhow. The code affected by this issue may be difficult to contain: - It likely affects all ZooKeeper client libraries that provide both asynchronous operations and transparent reconnection. That's probably all versions of the official Java client library, as well as most other client libraries. - It affects all applications using those libraries that depend on the FIFO client order of asynchronous operations. I don't know how common that is, but the paper implies that FIFO client order is important. - Fortunately, the issue can only manifest itself when connections are lost and transparently reestablished. In practice, it may also require a long pipeline or a significant delay in the application thread while the library establishes a new connection. - In case you're wondering, this issue occurred to me while working on a new client library for Go. I haven't seen this issue in the wild, but I was able to produce it locally by placing sleep statements in a Java program and closing its TCP connections. I'm new to this community, so I'm looking forward to the discussion. Let me know if I can clarify any of the above. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 20 weeks ago | 0|i3591r: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
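The API change proposed in ZOOKEEPER-2619 above can be sketched in plain Java: a connection object that latches a "lost" flag so every later operation on the same connection fails, instead of silently succeeding over a transparently re-established connection. This is a minimal illustrative model only; `ZKConnection`, `ConnectionLossException`, and `create` are stand-in names, not the real ZooKeeper API.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Stand-in for the proposed per-connection failure mode.
class ConnectionLossException extends Exception {}

class ZKConnection {
    private final AtomicBoolean lost = new AtomicBoolean(false);

    // In a real client the I/O layer would set this when the socket drops.
    void markLost() { lost.set(true); }

    // Once the connection is lost, every later op on THIS connection fails,
    // preserving FIFO client order: no op issued after a failed op can succeed.
    void create(String path, String data) throws ConnectionLossException {
        if (lost.get()) throw new ConnectionLossException();
        // ... would send the request over this connection's socket ...
    }
}

class FifoSketch {
    public static void main(String[] args) {
        ZKConnection conn = new ZKConnection();
        try {
            conn.create("/data-23857", "...");
            conn.markLost(); // connection drops mid-pipeline
            conn.create("/pointer", "/data-23857"); // must fail, not reconnect
        } catch (ConnectionLossException e) {
            System.out.println("connection loss: recover on a fresh connection");
        }
    }
}
```

Under this sketch the application, upon seeing the failure, would ask the library for a fresh connection object and run its recovery procedure there, which is exactly the separation the issue argues the current transparent-reconnect design cannot provide.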
| ZooKeeper | ZOOKEEPER-2618 | ZOOKEEPER-1394 fix ClassNotFoundException on shutdown of client |
Sub-task | Resolved | Minor | Duplicate | wu wen | wu wen | wu wen | 21/Oct/16 11:11 | 06/Dec/16 05:18 | 25/Oct/16 22:24 | 3.4.9 | java client | 0 | 3 | ZOOKEEPER-1394 | See ZOOKEEPER-1394; we also hit this issue. 2016-10-21 13:17:21.618 ERROR localhost-startStop-1-SendThread(172.21.134.7:2005) ClientCnxn:414 - from localhost-startStop-1-SendThread(172.21.134.7:2005) java.lang.NoClassDefFoundError: org/apache/zookeeper/server/ZooTrace at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1128) Caused by: java.lang.ClassNotFoundException: Illegal access: this web application instance has been stopped already. Could not load [org.apache.zookeeper.server.ZooTrace]. The following stack trace is thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access. at org.apache.catalina.loader.WebappClassLoaderBase.checkStateForClassLoading(WebappClassLoaderBase.java:1315) at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1178) at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1139) ... 1 more Caused by: java.lang.IllegalStateException: Illegal access: this web application instance has been stopped already. Could not load [org.apache.zookeeper.server.ZooTrace]. The following stack trace is thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access. at org.apache.catalina.loader.WebappClassLoaderBase.checkStateForResourceLoading(WebappClassLoaderBase.java:1325) at org.apache.catalina.loader.WebappClassLoaderBase.checkStateForClassLoading(WebappClassLoaderBase.java:1313) ... 3 more |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 |
Patch
|
3 years, 21 weeks, 1 day ago | 0|i357zj: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
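The NoClassDefFoundError in ZOOKEEPER-2618 above happens because the client's SendThread first touches `org.apache.zookeeper.server.ZooTrace` after Tomcat has stopped the webapp classloader. A common application-side workaround for this pattern (an assumption here, not a documented ZooKeeper fix) is to eagerly load such classes at webapp startup so no classloading is attempted during shutdown. A minimal sketch, using `java.util.ArrayList` as a stand-in class name:

```java
// Hypothetical workaround sketch: eagerly load classes that a background
// thread may reference during shutdown, so the (by then stopped) webapp
// classloader is never asked to load them. In a webapp this would run from
// a ServletContextListener's contextInitialized().
class EagerPreload {
    // Returns how many of the named classes were successfully loaded.
    static int preload(String... classNames) {
        int loaded = 0;
        for (String name : classNames) {
            try {
                Class.forName(name);
                loaded++;
            } catch (ClassNotFoundException e) {
                // Optional classes may be absent in some deployments; skip them.
            }
        }
        return loaded;
    }

    public static void main(String[] args) {
        // Stand-in for "org.apache.zookeeper.server.ZooTrace":
        int n = preload("java.util.ArrayList");
        System.out.println("preloaded " + n + " class(es)");
    }
}
```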
| ZooKeeper | ZOOKEEPER-2617 | correct a few spelling typos |
Bug | Closed | Trivial | Fixed | tony mancill | tony mancill | tony mancill | 18/Oct/16 01:14 | 17/May/17 23:50 | 13/Feb/17 21:45 | 3.4.9 | 3.4.10, 3.5.3, 3.6.0 | 0 | 9 | While working on the Debian packaging of ZooKeeper, some misspellings affecting the documentation, logging, and program output were found in the source. A GitHub PR containing the patch is here: https://github.com/apache/zookeeper/pull/87 |
newbie, patch | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 |
Patch
|
3 years, 5 weeks, 2 days ago | 0|i350on: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2616 | ZK client fails to connect to ReadOnly server |
Bug | Open | Critical | Unresolved | Unassigned | Benjamin Jaton | Benjamin Jaton | 17/Oct/16 18:08 | 30/Jan/17 15:34 | 3.5.2 | 0 | 5 | Only 1 of the 3 nodes of the ensemble is started. The server successfully started in readonly ("Read-only server started"). {code:title=client}System.setProperty("readonlymode.enabled", "true"); String cs = "QA-E8WIN11:2181,QA-E8WIN12:2181,QA-E8WIN13:2181"; ZooKeeper zk = new ZooKeeper(cs, 30000, null, true); // wait for connection while (!zk.getState().isConnected()) { Thread.sleep(1000); logger.error(zk.getState()); } zk.getData("/", false, new Stat()); logger.error("DONE");{code} The client code above manages to acquire a connection ("CONNECTEDREADONLY") but the subsequent getData fails with ConnectionLoss: {code:title=client log}2016-10-17 14:37:43 ERROR TestCuratorReadOnly:31 - CONNECTEDREADONLY 2016-10-17 14:39:49 ERROR o.a.z.ClientCnxn:526 - Error while calling watcher java.lang.NullPointerException at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:524) [zookeeper-3.5.2-alpha.jar:3.5.2-alpha--1] at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:499) [zookeeper-3.5.2-alpha.jar:3.5.2-alpha--1] Exception in thread "main" org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for / at org.apache.zookeeper.KeeperException.create(KeeperException.java:99) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1956) at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1985) at TestCuratorReadOnly.main(TestCuratorReadOnly.java:33){code} Full server logs are attached, but here are the relevant parts: {code:title=server log} 2016-10-17 14:37:31,375 [myid:1] - INFO [Thread-2:ReadOnlyZooKeeperServer@73] - Read-only server started (...) 
2016-10-17 14:37:55,241 [myid:1] - INFO [NIOServerCxnFactory.AcceptThread:/0.0.0.0:2181:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /10.11.12.4:40800 2016-10-17 14:37:55,250 [myid:1] - INFO [NIOWorkerThread-1:ZooKeeperServer@964] - Client attempting to establish new session at /10.11.12.4:40800 2016-10-17 14:37:55,255 [myid:1] - INFO [ProcessThread(sid:1 cport:-1)::ZooKeeperServer@678] - Established session 0x100024619520000 with negotiated timeout 30000 for client /10.11.12.4:40800 (...) [org.apache.ZooKeeperService:name0=ReplicatedServer_id1,name1=replica.1,name2=ReadOnlyServer,name3=Connections,name4=10.11.12.4,name5=0x100024619520000] 2016-10-17 14:38:26,929 [myid:1] - INFO [ProcessThread(sid:1 cport:-1)::NIOServerCnxn@607] - Closed socket connection for client /10.11.12.4:40800 which had sessionid 0x100024619520000{code} The client and server are using official 3.5.2-alpha. {code:title=zoo.cfg}autopurge.purgeInterval=3 initLimit=10 syncLimit=5 autopurge.snapRetainCount=3 snapCount=10000 minSessionTimeout=5000 maxSessionTimeout=600000 tickTime=2000 admin.commandURL=/commands quorumListenOnAllIPs=true dataDir=C:/workspace/zookeeper-3.5.2-alpha/data admin.serverPort=8080 admin.enableServer=false standaloneEnabled=false dynamicConfigFile=C:/workspace/zookeeper-3.5.2-alpha/conf/zoo.cfg.dynamic.10000046b{code} {code:title=zoo.cfg.dynamic.10000046b}server.1=QA-E8WIN11:2888:3888:participant;0.0.0.0:2181 server.2=QA-E8WIN12:2888:3888:participant;0.0.0.0:2181 server.3=QA-E8WIN13:2888:3888:participant;0.0.0.0:2181{code} |
9223372036854775807 | No Perforce job exists for this issue. | 5 | 9223372036854775807 | 3 years, 7 weeks, 3 days ago | 0|i3508f: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
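Note that the NullPointerException in the ZOOKEEPER-2616 client log above comes from the event thread dereferencing the default watcher, and the snippet constructs `ZooKeeper` with a `null` watcher. Whether that is the root cause of the ConnectionLoss is unconfirmed, but supplying a no-op watcher instead of `null` is a plausible client-side mitigation for the NPE. The types below are toy stand-ins modeling the failure, not the real `ClientCnxn` classes:

```java
// Minimal model of the failure mode: a null default watcher makes the
// event-dispatch path throw NullPointerException.
interface Watcher { void process(String event); }

class EventThreadModel {
    private final Watcher defaultWatcher;
    EventThreadModel(Watcher w) { this.defaultWatcher = w; }
    void deliver(String event) {
        defaultWatcher.process(event); // NPE here when constructed with null
    }
}

class ReadOnlyWatcherSketch {
    public static void main(String[] args) {
        // Mitigation: supply a no-op watcher instead of null.
        new EventThreadModel(e -> {}).deliver("SyncConnected(ReadOnly)"); // fine
        try {
            new EventThreadModel(null).deliver("SyncConnected(ReadOnly)");
        } catch (NullPointerException npe) {
            System.out.println("null watcher -> NPE, as in the client log");
        }
    }
}
```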
| ZooKeeper | ZOOKEEPER-2615 | Zookeeper server holds onto dead/expired session ids in the watch data structures |
Bug | Open | Major | Unresolved | Camille Fournier | guoping.gp | guoping.gp | 14/Oct/16 00:59 | 05/Feb/20 07:16 | 3.4.6 | 3.7.0, 3.5.8 | server | 0 | 7 | The same issue (https://issues.apache.org/jira/browse/ZOOKEEPER-1382) can still be found even with ZooKeeper 3.4.6. This issue caused our production ZooKeeper cluster to leak about 1 million watches; after restarting the servers one by one, the watch count decreased to only about 40,000. I can reproduce the issue on my Mac; here it is: ------------------------------------------------------------------------ pguodeMacBook-Air:bin pguo$ echo srvr | nc localhost 6181 Zookeeper version: 3.4.6-1569965, built on 02/20/2014 09:09 GMT Latency min/avg/max: 0/1156/128513 Received: 539 Sent: 531 Connections: 1 Outstanding: 0 Zxid: 0x100000037 Mode: follower Node count: 5 ------------------------ pguodeMacBook-Air:bin pguo$ echo cons | nc localhost 6181 /127.0.0.1:55759[1](queued=0,recved=5,sent=5,sid=0x157be2732d0000e,lop=PING,est=1476372631116,to=15000,lcxid=0x1,lzxid=0xffffffffffffffff,lresp=1476372646260,llat=8,minlat=0,avglat=6,maxlat=17) /0:0:0:0:0:0:0:1:55767[0](queued=0,recved=1,sent=0) ------------------------ pguodeMacBook-Air:bin pguo$ echo wchp | nc localhost 6181 /curator_exists_watch 0x357be48e4d90007 0x357be48e4d90009 0x157be2732d0000e As the four-letter-word reports above show, 0x357be48e4d90007 and 0x357be48e4d90009 are leaked after the two sessions expired. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 20 weeks, 6 days ago | 0|i34vnr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
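The leak in ZOOKEEPER-2615 above can be modeled on a toy structure: a watch table keyed by path that holds session ids, where the session-expiry path fails to sweep out the dead session's entries, so they remain visible in `wchp` forever. This is an illustrative model of the symptom and the fix direction (a per-session cleanup pass), not the real server's `WatchManager`:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy watch table: path -> set of watching session ids.
class WatchTable {
    private final Map<String, Set<Long>> watchesByPath = new HashMap<>();

    void addWatch(String path, long sessionId) {
        watchesByPath.computeIfAbsent(path, p -> new HashSet<>()).add(sessionId);
    }

    // The cleanup that the report suggests is not happening reliably in
    // practice: on session close/expiry, drop that session id from every
    // path's watcher set so wchp no longer shows it.
    void removeSession(long sessionId) {
        watchesByPath.values().forEach(s -> s.remove(sessionId));
        watchesByPath.values().removeIf(Set::isEmpty);
    }

    int watchCount() {
        return watchesByPath.values().stream().mapToInt(Set::size).sum();
    }
}
```

Without `removeSession` being invoked on expiry, every expired session's ids accumulate, which matches the observed behavior of the count only dropping after server restarts rebuild the table from live sessions.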
| ZooKeeper | ZOOKEEPER-2614 | Port ZOOKEEPER-1576 to branch3.4 |
Bug | Resolved | Major | Fixed | Thomas Schüttel | Vishal Khandelwal | Vishal Khandelwal | 12/Oct/16 03:02 | 19/Aug/17 07:24 | 01/Aug/17 11:55 | 3.4.9 | 3.4.11 | 0 | 8 | ZOOKEEPER-1576 handles UnknownHostException, and it would be good to have this change on the 3.4 branch as well. This ports the changes to 3.4 after resolving the conflicts. | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 2 years, 30 weeks, 5 days ago | 0|i34rkv: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2613 | user-level KeeperException |
Bug | Open | Major | Unresolved | Unassigned | venkata puvvada | venkata puvvada | 11/Oct/16 13:31 | 11/Oct/16 13:31 | 0 | 1 | Apache SOLR 4.10 in Zookeeper | We are facing a serious problem in production with ZooKeeper. It had been working perfectly for many months, but suddenly, without any configuration change, we hit the problem below. Because of it, we are unable to do anything on this collection. 2016-10-11 09:54:56,448 [myid:2] - INFO [ProcessThread(sid:2 cport:-1)::PrepRequestProcessor@651] - Got user-level KeeperException when processing sessionid:0x156f39ec13e00b9 type:setData cxid:0x4728359 zxid:0xa00560776 txntype:-1 reqpath:n/a Error Path:/solr/configs/constants/managed-schema Error:KeeperErrorCode = BadVersion for /solr/configs/constants/managed-schema ^C |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 23 weeks, 2 days ago | 0|i34qlj: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
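A note on the log in ZOOKEEPER-2613 above: `BadVersion` is the normal outcome of ZooKeeper's conditional `setData` when the caller supplies a stale version, i.e. something else (here, presumably another Solr node) updated the znode first. The usual remedy is a read-modify-write retry loop. The sketch below shows the pattern on a toy versioned store rather than the real ZooKeeper API:

```java
// Toy stand-in for a znode with an optimistic-concurrency version counter.
class BadVersionException extends Exception {}

class VersionedNode {
    private String data = "";
    private int version = 0;

    synchronized int getVersion() { return version; }
    synchronized String getData() { return data; }

    // Conditional write: succeeds only if the caller read the latest version.
    synchronized void setData(String newData, int expectedVersion)
            throws BadVersionException {
        if (expectedVersion != version) throw new BadVersionException();
        data = newData;
        version++;
    }
}

class CasRetrySketch {
    // Read-modify-write loop: on BadVersion, re-read the version and retry.
    static void updateWithRetry(VersionedNode node, String newData) {
        while (true) {
            int v = node.getVersion();
            try {
                node.setData(newData, v);
                return;
            } catch (BadVersionException e) {
                // Lost the race to a concurrent writer; loop and try again.
            }
        }
    }
}
```

If the BadVersion errors persist indefinitely, the stale version is likely cached on the application side (e.g. Solr's schema handling) rather than a ZooKeeper server fault.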
| ZooKeeper | ZOOKEEPER-2612 | user-level KeeperException |
Bug | Open | Major | Unresolved | Unassigned | venkata puvvada | venkata puvvada | 11/Oct/16 13:31 | 11/Oct/16 13:31 | 0 | 2 | Apache SOLR 4.10 in Zookeeper | We are facing a serious problem in production with ZooKeeper. It had been working perfectly for many months, but suddenly, without any configuration change, we hit the problem below. Because of it, we are unable to do anything on this collection. 2016-10-11 09:54:56,448 [myid:2] - INFO [ProcessThread(sid:2 cport:-1)::PrepRequestProcessor@651] - Got user-level KeeperException when processing sessionid:0x156f39ec13e00b9 type:setData cxid:0x4728359 zxid:0xa00560776 txntype:-1 reqpath:n/a Error Path:/solr/configs/constants/managed-schema Error:KeeperErrorCode = BadVersion for /solr/configs/constants/managed-schema ^C |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 23 weeks, 2 days ago | 0|i34qlb: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2611 | zoo_remove_watchers - can remove the wrong watch |
Bug | Closed | Critical | Fixed | Eyal leshem | Eyal leshem | Eyal leshem | 09/Oct/16 04:24 | 16/Dec/18 09:50 | 09/Oct/16 16:47 | 3.5.3, 3.6.0 | c client | 0 | 5 | ZOOKEEPER-1887 | The actual problem is in the function "removeWatcherFromList": when we check whether a watch should be deleted, we compare the WatcherCtx against the node one before the one we want to delete. |
remove_watches | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 22 weeks, 4 days ago | 0|i34mrr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
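The bug in ZOOKEEPER-2611 above is an off-by-one in a singly linked watcher list: the deletion check compared the context of the node *before* the candidate. A correct removal tests the candidate node itself and keeps a predecessor reference only for unlinking. This toy Java list is a stand-in for the C client's `removeWatcherFromList`, shown to illustrate the correct pattern rather than the actual fix:

```java
// Singly linked list of watcher registrations, keyed by a context object
// (mirroring the C client's WatcherCtx pointer comparison).
class WatcherNode {
    final Object ctx;
    WatcherNode next;
    WatcherNode(Object ctx, WatcherNode next) { this.ctx = ctx; this.next = next; }
}

class RemoveWatcherSketch {
    // Remove the first node whose ctx matches; returns the (possibly new) head.
    static WatcherNode remove(WatcherNode head, Object ctx) {
        WatcherNode prev = null;
        for (WatcherNode cur = head; cur != null; prev = cur, cur = cur.next) {
            if (cur.ctx == ctx) {              // compare the candidate, not prev
                if (prev == null) return cur.next; // removing the head
                prev.next = cur.next;              // unlink via predecessor
                return head;
            }
        }
        return head; // no match: list unchanged
    }
}
```

The buggy version effectively tested `prev.ctx` where `cur.ctx` was intended, so it could unlink a neighboring registration, i.e. "remove the wrong watch" as the summary says.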
| ZooKeeper | ZOOKEEPER-2610 | ZOOKEEPER-3170 Flaky Test: org.apache.zookeeper.test.ReconfigTest.testQuorumSystemChange |
Sub-task | Resolved | Major | Not A Problem | Andor Molnar | Michael Han | Michael Han | 08/Oct/16 14:13 | 19/Dec/19 17:59 | 25/Oct/18 10:36 | 3.5.2 | quorum, server, tests | 0 | 3 | ZOOKEEPER-2135 | {noformat} Regression org.apache.zookeeper.test.ReconfigTest.testQuorumSystemChange (from org.apache.zookeeper.test.NioNettySuiteTest) Failing for the past 1 build (Since Failed#3462 ) Took 2 min 10 sec. Error Message Waiting for server up Stacktrace junit.framework.AssertionFailedError: Waiting for server up at org.apache.zookeeper.test.QuorumUtil.restart(QuorumUtil.java:216) at org.apache.zookeeper.test.ReconfigTest.testQuorumSystemChange(ReconfigTest.java:861) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:79) Standard Output 2016-10-04 01:04:03,140 [myid:] - INFO [main:JUnit4ZKTestRunner@47] - No test.method specified. using default methods. 2016-10-04 01:04:03,170 [myid:] - INFO [main:JUnit4ZKTestRunner@47] - No test.method specified. using default methods. 2016-10-04 01:04:03,179 [myid:] - INFO [main:JUnit4ZKTestRunner@47] - No test.method specified. using default methods. 2016-10-04 01:04:03,182 [myid:] - INFO [main:JUnit4ZKTestRunner@47] - No test.method specified. using default methods. 2016-10-04 01:04:03,186 [myid:] - INFO [main:JUnit4ZKTestRunner@47] - No test.method specified. using default methods. 2016-10-04 01:04:03,190 [myid:] - INFO [main:JUnit4ZKTestRunner@47] - No test.method specified. using default methods. 2016-10-04 01:04:03,200 [myid:] - INFO [main:JUnit4ZKTestRunner@47] - No test.method specified. using default methods. 2016-10-04 01:04:03,204 [myid:] - INFO [main:JUnit4ZKTestRunner@47] - No test.method specified. using default methods. 2016-10-04 01:04:03,208 [myid:] - INFO [main:JUnit4ZKTestRunner@47] - No test.method specified. using default methods. 2016-10-04 01:04:03,276 [myid:] - INFO [main:JUnit4ZKTestRunner@47] - No test.method specified. using default methods. 
2016-10-04 01:04:03,279 [myid:] - INFO [main:JUnit4ZKTestRunner@47] - No test.method specified. using default methods. 2016-10-04 01:04:03,282 [myid:] - INFO [main:JUnit4ZKTestRunner@47] - No test.method specified. using default methods. 2016-10-04 01:04:03,284 [myid:] - INFO [main:JUnit4ZKTestRunner@47] - No test.method specified. using default methods. 2016-10-04 01:04:03,285 [myid:] - INFO [main:JUnit4ZKTestRunner@47] - No test.method specified. using default methods. 2016-10-04 01:04:03,286 [myid:] - INFO [main:JUnit4ZKTestRunner@47] - No test.method specified. using default methods. 2016-10-04 01:04:03,286 [myid:] - INFO [main:JUnit4ZKTestRunner@47] - No test.method specified. using default methods. 2016-10-04 01:04:03,291 [myid:] - INFO [main:JUnit4ZKTestRunner@47] - No test.method specified. using default methods. 2016-10-04 01:04:03,292 [myid:] - INFO [main:JUnit4ZKTestRunner@47] - No test.method specified. using default methods. 2016-10-04 01:04:03,304 [myid:] - INFO [main:PortAssignment@151] - Test process 3/8 using ports from 16607 - 19299. 2016-10-04 01:04:03,308 [myid:] - INFO [main:PortAssignment@85] - Assigned port 16608 from range 16607 - 19299. 
2016-10-04 01:04:03,316 [myid:] - INFO [main:ZKTestCase$1@55] - STARTING testAcls 2016-10-04 01:04:03,317 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@77] - RUNNING TEST METHOD testAcls 2016-10-04 01:04:03,337 [myid:] - INFO [main:Environment@109] - Server environment:zookeeper.version=3.6.0-SNAPSHOT--1, built on 10/04/2016 00:56 GMT 2016-10-04 01:04:03,337 [myid:] - INFO [main:Environment@109] - Server environment:host.name=asf905.gq1.ygridcore.net 2016-10-04 01:04:03,337 [myid:] - INFO [main:Environment@109] - Server environment:java.version=1.7.0_80 2016-10-04 01:04:03,337 [myid:] - INFO [main:Environment@109] - Server environment:java.vendor=Oracle Corporation 2016-10-04 01:04:03,337 [myid:] - INFO [main:Environment@109] - Server environment:java.home=/usr/local/asfpackages/java/jdk1.7.0_80/jre 2016-10-04 01:04:03,337 [myid:] - INFO [main:Environment@109] - Server environment:java.class.path=/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/classes:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/lib/antlr-2.7.7.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/lib/antlr4-runtime-4.5.1-1.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/lib/checkstyle-6.13.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/lib/commons-beanutils-1.9.2.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/lib/commons-cli-1.3.1.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/lib/commons-collections-3.2.2.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/lib/commons-lang3-3.4.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/lib/commons-logging-1.1.1.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/lib/guava-18.0.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/li
b/hamcrest-core-1.3.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/lib/junit-4.12.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/lib/mockito-all-1.8.2.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/classes:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/src/java/lib/ivy-2.4.0.jar:/home/jenkins/tools/ant/latest/lib/ant.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/lib/apache-rat-core-0.10.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/lib/apache-rat-tasks-0.10.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/lib/commons-cli-1.2.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/lib/commons-collections-3.2.2.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/lib/commons-compress-1.5.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/lib/commons-io-2.2.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/lib/commons-lang-2.6.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/lib/jackson-core-asl-1.9.11.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/lib/jackson-mapper-asl-1.9.11.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/lib/javacc.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/lib/javax.servlet-api-3.1.0.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/lib/jetty-http-9.2.18.v20160721.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/lib/jetty-io-9.2.18.v20160721.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/lib/jetty-security-9.2.18.v20160721.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/lib/jetty-server-9.2.18.v20160721.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/
build/lib/jetty-servlet-9.2.18.v20160721.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/lib/jetty-util-9.2.18.v20160721.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/lib/jline-2.11.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/lib/log4j-1.2.17.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/lib/netty-3.10.5.Final.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/lib/slf4j-api-1.7.5.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/lib/slf4j-log4j12-1.7.5.jar:/usr/local/asfpackages/ant/apache-ant-1.9.7/lib/ant-launcher.jar:/home/jenkins/tools/ant/latest/lib/ant-junit.jar:/home/jenkins/tools/ant/latest/lib/ant-junit4.jar 2016-10-04 01:04:03,338 [myid:] - INFO [main:Environment@109] - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib 2016-10-04 01:04:03,338 [myid:] - INFO [main:Environment@109] - Server environment:java.io.tmpdir=/tmp 2016-10-04 01:04:03,338 [myid:] - INFO [main:Environment@109] - Server environment:java.compiler=<NA> 2016-10-04 01:04:03,338 [myid:] - INFO [main:Environment@109] - Server environment:os.name=Linux 2016-10-04 01:04:03,339 [myid:] - INFO [main:Environment@109] - Server environment:os.arch=amd64 2016-10-04 01:04:03,339 [myid:] - INFO [main:Environment@109] - Server environment:os.version=3.13.0-95-generic 2016-10-04 01:04:03,339 [myid:] - INFO [main:Environment@109] - Server environment:user.name=jenkins 2016-10-04 01:04:03,339 [myid:] - INFO [main:Environment@109] - Server environment:user.home=/home/jenkins 2016-10-04 01:04:03,339 [myid:] - INFO [main:Environment@109] - Server environment:user.dir=/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test 2016-10-04 01:04:03,339 [myid:] - INFO [main:Environment@109] - Server environment:os.memory.free=475MB 2016-10-04 01:04:03,340 [myid:] - INFO [main:Environment@109] - 
Server environment:os.memory.max=491MB 2016-10-04 01:04:03,340 [myid:] - INFO [main:Environment@109] - Server environment:os.memory.total=491MB 2016-10-04 01:04:03,357 [myid:] - INFO [main:ZooKeeperServer@889] - minSessionTimeout set to 6000 2016-10-04 01:04:03,357 [myid:] - INFO [main:ZooKeeperServer@898] - maxSessionTimeout set to 60000 2016-10-04 01:04:03,357 [myid:] - INFO [main:ZooKeeperServer@159] - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/tmp/test1469455732852767377.junit.dir/version-2 snapdir /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/tmp/test1469455732852767377.junit.dir/version-2 2016-10-04 01:04:03,469 [myid:] - INFO [main:NettyServerCnxnFactory@487] - binding to port 0.0.0.0/0.0.0.0:16608 2016-10-04 01:04:03,509 [myid:] - INFO [main:FileTxnSnapLog@306] - Snapshotting: 0x0 to /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/tmp/test1469455732852767377.junit.dir/version-2/snapshot.0 2016-10-04 01:04:03,614 [myid:] - ERROR [main:ZooKeeperServer@501] - ZKShutdownHandler is not registered, so ZooKeeper server won't take any action on ERROR or SHUTDOWN server state changes 2016-10-04 01:04:03,615 [myid:] - INFO [main:ACLTest@108] - starting up the zookeeper server .. 
waiting 2016-10-04 01:04:03,617 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 16608 2016-10-04 01:04:03,652 [myid:] - INFO [New I/O worker #1:NettyServerCnxn@275] - Processing stat command from /127.0.0.1:52750 2016-10-04 01:04:03,660 [myid:] - INFO [New I/O worker #1:StatCommand@49] - Stat command output 2016-10-04 01:04:03,670 [myid:] - INFO [main:Environment@109] - Client environment:zookeeper.version=3.6.0-SNAPSHOT--1, built on 10/04/2016 00:56 GMT 2016-10-04 01:04:03,671 [myid:] - INFO [main:Environment@109] - Client environment:host.name=asf905.gq1.ygridcore.net 2016-10-04 01:04:03,671 [myid:] - INFO [main:Environment@109] - Client environment:java.version=1.7.0_80 2016-10-04 01:04:03,671 [myid:] - INFO [main:Environment@109] - Client environment:java.vendor=Oracle Corporation 2016-10-04 01:04:03,671 [myid:] - INFO [main:Environment@109] - Client environment:java.home=/usr/local/asfpackages/java/jdk1.7.0_80/jre 2016-10-04 01:04:03,671 [myid:] - INFO [main:Environment@109] - Client 
environment:java.class.path=/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/classes:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/lib/antlr-2.7.7.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/lib/antlr4-runtime-4.5.1-1.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/lib/checkstyle-6.13.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/lib/commons-beanutils-1.9.2.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/lib/commons-cli-1.3.1.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/lib/commons-collections-3.2.2.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/lib/commons-lang3-3.4.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/lib/commons-logging-1.1.1.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/lib/guava-18.0.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/lib/hamcrest-core-1.3.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/lib/junit-4.12.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/lib/mockito-all-1.8.2.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/classes:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/src/java/lib/ivy-2.4.0.jar:/home/jenkins/tools/ant/latest/lib/ant.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/lib/apache-rat-core-0.10.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/lib/apache-rat-tasks-0.10.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/lib/commons-cli-1.2.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/lib/commons-collections-3.2.2.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/bui
ld/lib/commons-compress-1.5.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/lib/commons-io-2.2.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/lib/commons-lang-2.6.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/lib/jackson-core-asl-1.9.11.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/lib/jackson-mapper-asl-1.9.11.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/lib/javacc.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/lib/javax.servlet-api-3.1.0.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/lib/jetty-http-9.2.18.v20160721.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/lib/jetty-io-9.2.18.v20160721.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/lib/jetty-security-9.2.18.v20160721.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/lib/jetty-server-9.2.18.v20160721.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/lib/jetty-servlet-9.2.18.v20160721.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/lib/jetty-util-9.2.18.v20160721.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/lib/jline-2.11.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/lib/log4j-1.2.17.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/lib/netty-3.10.5.Final.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/lib/slf4j-api-1.7.5.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/lib/slf4j-log4j12-1.7.5.jar:/usr/local/asfpackages/ant/apache-ant-1.9.7/lib/ant-launcher.jar:/home/jenkins/tools/ant/latest/lib/ant-junit.jar:/home/jenkins/tools/ant/latest/lib/ant-junit4.jar 2016-10-04 01:04:03,672 [myid:] - INFO [main:Environment@109] - Client 
environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2016-10-04 01:04:03,672 [myid:] - INFO [main:Environment@109] - Client environment:java.io.tmpdir=/tmp
2016-10-04 01:04:03,672 [myid:] - INFO [main:Environment@109] - Client environment:java.compiler=<NA>
2016-10-04 01:04:03,672 [myid:] - INFO [main:Environment@109] - Client environment:os.name=Linux
2016-10-04 01:04:03,672 [myid:] - INFO [main:Environment@109] - Client environment:os.arch=amd64
2016-10-04 01:04:03,672 [myid:] - INFO [main:Environment@109] - Client environment:os.version=3.13.0-95-generic
2016-10-04 01:04:03,673 [myid:] - INFO [main:Environment@109] - Client environment:user.name=jenkins
2016-10-04 01:04:03,673 [myid:] - INFO [main:Environment@109] - Client environment:user.home=/home/jenkins
2016-10-04 01:04:03,673 [myid:] - INFO [main:Environment@109] - Client environment:user.dir=/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test
2016-10-04 01:04:03,673 [myid:] - INFO [main:Environment@109] - Client environment:os.memory.free=377MB
2016-10-04 01:04:03,673 [myid:] - INFO [main:Environment@109] - Client environment:os.memory.max=491MB
2016-10-04 01:04:03,674 [myid:] - INFO [main:Environment@109] - Client environment:os.memory.total=491MB
2016-10-04 01:04:03,676 [myid:] - INFO [main:ZooKeeper@853] - Initiating client connection, connectString=127.0.0.1:16608 sessionTimeout=30000 watcher=org.apache.zookeeper.test.ClientBase$CountdownWatcher@40ae49d5
2016-10-04 01:04:03,693 [myid:127.0.0.1:16608] - INFO [main-SendThread(127.0.0.1:16608):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16608. Will not attempt to authenticate using SASL (unknown error)
2016-10-04 01:04:03,694 [myid:127.0.0.1:16608] - INFO [main-SendThread(127.0.0.1:16608):ClientCnxn$SendThread@948] - Socket connection established, initiating session, client: /127.0.0.1:52752, server: 127.0.0.1/127.0.0.1:16608
2016-10-04 01:04:03,698 [myid:] - INFO [New I/O worker #2:ZooKeeperServer@995] - Client attempting to establish new session at /127.0.0.1:52752
2016-10-04 01:04:03,702 [myid:] - INFO [SyncThread:0:FileTxnLog@204] - Creating new log file: log.1
2016-10-04 01:04:03,782 [myid:] - INFO [SyncThread:0:ZooKeeperServer@709] - Established session 0x10020f0235b0000 with negotiated timeout 30000 for client /127.0.0.1:52752
2016-10-04 01:04:03,782 [myid:127.0.0.1:16608] - INFO [main-SendThread(127.0.0.1:16608):ClientCnxn$SendThread@1381] - Session establishment complete on server 127.0.0.1/127.0.0.1:16608, sessionid = 0x10020f0235b0000, negotiated timeout = 30000
2016-10-04 01:04:03,785 [myid:] - INFO [main:ACLTest@112] - starting creating acls
2016-10-04 01:04:08,518 [myid:] - INFO [main:NettyServerCnxnFactory@464] - shutdown called 0.0.0.0/0.0.0.0:16608
2016-10-04 01:04:08,519 [myid:127.0.0.1:16608] - INFO [main-SendThread(127.0.0.1:16608):ClientCnxn$SendThread@1231] - Unable to read additional data from server sessionid 0x10020f0235b0000, likely server has closed socket, closing socket connection and attempting reconnect
2016-10-04 01:04:08,521 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port16608,name1=Connections,name2=127.0.0.1,name3=0x10020f0235b0000]
2016-10-04 01:04:08,536 [myid:] - INFO [main:ZooKeeperServer@529] - shutting down
2016-10-04 01:04:08,536 [myid:] - ERROR [main:ZooKeeperServer@501] - ZKShutdownHandler is not registered, so ZooKeeper server won't take any action on ERROR or SHUTDOWN server state changes
2016-10-04 01:04:08,536 [myid:] - INFO [main:SessionTrackerImpl@232] - Shutting down
2016-10-04 01:04:08,536 [myid:] - INFO [main:PrepRequestProcessor@975] - Shutting down
2016-10-04 01:04:08,537 [myid:] - INFO [main:SyncRequestProcessor@191] - Shutting down
2016-10-04 01:04:08,537 [myid:] - INFO [ProcessThread(sid:0 cport:16608)::PrepRequestProcessor@154] - PrepRequestProcessor exited loop!
2016-10-04 01:04:08,537 [myid:] - INFO [SyncThread:0:SyncRequestProcessor@169] - SyncRequestProcessor exited!
2016-10-04 01:04:08,537 [myid:] - INFO [main:FinalRequestProcessor@479] - shutdown of request processor complete
2016-10-04 01:04:08,538 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port16608,name1=InMemoryDataTree]
2016-10-04 01:04:08,538 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port16608]
2016-10-04 01:04:08,538 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 16608
2016-10-04 01:04:08,539 [myid:] - INFO [main:ZooKeeperServer@889] - minSessionTimeout set to 6000
2016-10-04 01:04:08,539 [myid:] - INFO [main:ZooKeeperServer@898] - maxSessionTimeout set to 60000
2016-10-04 01:04:08,540 [myid:] - INFO [main:ZooKeeperServer@159] - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/tmp/test1469455732852767377.junit.dir/version-2 snapdir /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/tmp/test1469455732852767377.junit.dir/version-2
2016-10-04 01:04:08,565 [myid:] - INFO [main:NettyServerCnxnFactory@487] - binding to port 0.0.0.0/0.0.0.0:16608
2016-10-04 01:04:08,567 [myid:] - INFO [main:FileSnap@83] - Reading snapshot /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/tmp/test1469455732852767377.junit.dir/version-2/snapshot.0
2016-10-04 01:04:08,598 [myid:] - INFO [main:FileTxnSnapLog@306] - Snapshotting: 0xc9 to /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/tmp/test1469455732852767377.junit.dir/version-2/snapshot.c9
2016-10-04 01:04:08,609 [myid:] - ERROR [main:ZooKeeperServer@501] - ZKShutdownHandler is not registered, so ZooKeeper server won't take any action on ERROR or SHUTDOWN server state changes
2016-10-04 01:04:08,609 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 16608
2016-10-04 01:04:08,610 [myid:] - INFO [New I/O worker #34:NettyServerCnxn@275] - Processing stat command from /127.0.0.1:52862
2016-10-04 01:04:08,610 [myid:] - INFO [New I/O worker #34:StatCommand@49] - Stat command output
2016-10-04 01:04:08,611 [myid:] - INFO [main:ZooKeeper@853] - Initiating client connection, connectString=127.0.0.1:16608 sessionTimeout=30000 watcher=org.apache.zookeeper.test.ClientBase$CountdownWatcher@19450f1a
2016-10-04 01:04:08,612 [myid:127.0.0.1:16608] - INFO [main-SendThread(127.0.0.1:16608):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16608. Will not attempt to authenticate using SASL (unknown error)
2016-10-04 01:04:08,613 [myid:127.0.0.1:16608] - INFO [main-SendThread(127.0.0.1:16608):ClientCnxn$SendThread@948] - Socket connection established, initiating session, client: /127.0.0.1:52863, server: 127.0.0.1/127.0.0.1:16608
2016-10-04 01:04:08,614 [myid:] - INFO [New I/O worker #35:ZooKeeperServer@995] - Client attempting to establish new session at /127.0.0.1:52863
2016-10-04 01:04:08,615 [myid:] - INFO [SyncThread:0:FileTxnLog@204] - Creating new log file: log.ca
2016-10-04 01:04:08,637 [myid:] - INFO [SyncThread:0:ZooKeeperServer@709] - Established session 0x10020f0373e0000 with negotiated timeout 30000 for client /127.0.0.1:52863
2016-10-04 01:04:08,638 [myid:127.0.0.1:16608] - INFO [main-SendThread(127.0.0.1:16608):ClientCnxn$SendThread@1381] - Session establishment complete on server 127.0.0.1/127.0.0.1:16608, sessionid = 0x10020f0373e0000, negotiated timeout = 30000
2016-10-04 01:04:08,681 [myid:] - INFO [ProcessThread(sid:0 cport:16608)::PrepRequestProcessor@657] - Processed session termination for sessionid: 0x10020f0373e0000
2016-10-04 01:04:08,690 [myid:] - INFO [main:ZooKeeper@1311] - Session: 0x10020f0373e0000 closed
2016-10-04 01:04:08,690 [myid:] - INFO [SyncThread:0:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port16608,name1=Connections,name2=127.0.0.1,name3=0x10020f0373e0000]
2016-10-04 01:04:08,690 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@513] - EventThread shut down for session: 0x10020f0373e0000
2016-10-04 01:04:08,691 [myid:] - INFO [main:NettyServerCnxnFactory@464] - shutdown called 0.0.0.0/0.0.0.0:16608
2016-10-04 01:04:08,717 [myid:] - INFO [main:ZooKeeperServer@529] - shutting down
2016-10-04 01:04:08,717 [myid:] - ERROR [main:ZooKeeperServer@501] - ZKShutdownHandler is not registered, so ZooKeeper server won't take any action on ERROR or SHUTDOWN server state changes
2016-10-04 01:04:08,717 [myid:] - INFO [main:SessionTrackerImpl@232] - Shutting down
2016-10-04 01:04:08,717 [myid:] - INFO [main:PrepRequestProcessor@975] - Shutting down
2016-10-04 01:04:08,718 [myid:] - INFO [main:SyncRequestProcessor@191] - Shutting down
2016-10-04 01:04:08,718 [myid:] - INFO [ProcessThread(sid:0 cport:16608)::PrepRequestProcessor@154] - PrepRequestProcessor exited loop!
2016-10-04 01:04:08,718 [myid:] - INFO [SyncThread:0:SyncRequestProcessor@169] - SyncRequestProcessor exited!
2016-10-04 01:04:08,718 [myid:] - INFO [main:FinalRequestProcessor@479] - shutdown of request processor complete
2016-10-04 01:04:08,719 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port16608,name1=InMemoryDataTree]
2016-10-04 01:04:08,719 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port16608]
2016-10-04 01:04:08,720 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 16608
2016-10-04 01:04:08,721 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@82] - Memory used 39623
2016-10-04 01:04:08,721 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@87] - Number of threads 9
2016-10-04 01:04:08,721 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@102] - FINISHED TEST METHOD testAcls
2016-10-04 01:04:08,721 [myid:] - INFO [main:ZKTestCase$1@65] - SUCCEEDED testAcls
2016-10-04 01:04:08,722 [myid:] - INFO [main:ZKTestCase$1@60] - FINISHED testAcls
2016-10-04 01:04:08,723 [myid:] - INFO [main:ZKTestCase$1@55] - STARTING testIPAuthenticationIsValidCIDR
2016-10-04 01:04:08,723 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@77] - RUNNING TEST METHOD testIPAuthenticationIsValidCIDR
2016-10-04 01:04:08,724 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@82] - Memory used 39623
2016-10-04 01:04:08,724 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@87] - Number of threads 9
2016-10-04 01:04:08,724 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@102] - FINISHED TEST METHOD testIPAuthenticationIsValidCIDR
2016-10-04 01:04:08,725 [myid:] - INFO [main:ZKTestCase$1@65] - SUCCEEDED testIPAuthenticationIsValidCIDR
2016-10-04 01:04:08,725 [myid:] - INFO [main:ZKTestCase$1@60] - FINISHED testIPAuthenticationIsValidCIDR
2016-10-04 01:04:08,741 [myid:] - INFO [main:ZKTestCase$1@55] - STARTING testDisconnectedAddAuth
2016-10-04 01:04:08,741 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@77] - RUNNING TEST METHOD testDisconnectedAddAuth
2016-10-04 01:04:08,742 [myid:] - INFO [main:ZooKeeperServer@889] - minSessionTimeout set to 6000
2016-10-04 01:04:08,742 [myid:] - INFO [main:ZooKeeperServer@898] - maxSessionTimeout set to 60000
2016-10-04 01:04:08,743 [myid:] - INFO [main:ZooKeeperServer@159] - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/tmp/test6791388306054347439.junit.dir/version-2 snapdir /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/tmp/test6791388306054347439.junit.dir/version-2
2016-10-04 01:04:08,751 [myid:] - INFO [main:NettyServerCnxnFactory@487] - binding to port 0.0.0.0/0.0.0.0:16608
2016-10-04 01:04:08,753 [myid:] - INFO [main:FileTxnSnapLog@306] - Snapshotting: 0x0 to /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/tmp/test6791388306054347439.junit.dir/version-2/snapshot.0
2016-10-04 01:04:08,755 [myid:] - ERROR [main:ZooKeeperServer@501] - ZKShutdownHandler is not registered, so ZooKeeper server won't take any action on ERROR or SHUTDOWN server state changes
2016-10-04 01:04:08,755 [myid:] - INFO [main:ACLTest@72] - starting up the zookeeper server .. waiting
2016-10-04 01:04:08,755 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 16608
2016-10-04 01:04:08,757 [myid:] - INFO [New I/O worker #67:NettyServerCnxn@275] - Processing stat command from /127.0.0.1:52865
2016-10-04 01:04:08,757 [myid:] - INFO [New I/O worker #67:StatCommand@49] - Stat command output
2016-10-04 01:04:08,758 [myid:] - INFO [main:ZooKeeper@853] - Initiating client connection, connectString=127.0.0.1:16608 sessionTimeout=30000 watcher=org.apache.zookeeper.test.ClientBase$CountdownWatcher@68098843
2016-10-04 01:04:08,759 [myid:127.0.0.1:16608] - INFO [main-SendThread(127.0.0.1:16608):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16608. Will not attempt to authenticate using SASL (unknown error)
2016-10-04 01:04:08,759 [myid:127.0.0.1:16608] - INFO [main-SendThread(127.0.0.1:16608):ClientCnxn$SendThread@948] - Socket connection established, initiating session, client: /127.0.0.1:52866, server: 127.0.0.1/127.0.0.1:16608
2016-10-04 01:04:08,760 [myid:] - INFO [New I/O worker #68:ZooKeeperServer@995] - Client attempting to establish new session at /127.0.0.1:52866
2016-10-04 01:04:08,761 [myid:] - INFO [SyncThread:0:FileTxnLog@204] - Creating new log file: log.1
2016-10-04 01:04:08,800 [myid:] - INFO [SyncThread:0:ZooKeeperServer@709] - Established session 0x10020f037d10000 with negotiated timeout 30000 for client /127.0.0.1:52866
2016-10-04 01:04:08,801 [myid:127.0.0.1:16608] - INFO [main-SendThread(127.0.0.1:16608):ClientCnxn$SendThread@1381] - Session establishment complete on server 127.0.0.1/127.0.0.1:16608, sessionid = 0x10020f037d10000, negotiated timeout = 30000
2016-10-04 01:04:08,803 [myid:] - INFO [New I/O worker #68:ZooKeeperServer@1032] - got auth packet /127.0.0.1:52866
2016-10-04 01:04:08,803 [myid:] - INFO [New I/O worker #68:ZooKeeperServer@1050] - auth success /127.0.0.1:52866
2016-10-04 01:04:08,822 [myid:] - INFO [ProcessThread(sid:0 cport:16608)::PrepRequestProcessor@657] - Processed session termination for sessionid: 0x10020f037d10000
2016-10-04 01:04:08,830 [myid:] - INFO [SyncThread:0:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port16608,name1=Connections,name2=127.0.0.1,name3=0x10020f037d10000]
2016-10-04 01:04:08,830 [myid:] - INFO [main:ZooKeeper@1311] - Session: 0x10020f037d10000 closed
2016-10-04 01:04:08,830 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@513] - EventThread shut down for session: 0x10020f037d10000
2016-10-04 01:04:08,830 [myid:] - INFO [main:NettyServerCnxnFactory@464] - shutdown called 0.0.0.0/0.0.0.0:16608
2016-10-04 01:04:08,839 [myid:] - INFO [main:ZooKeeperServer@529] - shutting down
2016-10-04 01:04:08,839 [myid:] - ERROR [main:ZooKeeperServer@501] - ZKShutdownHandler is not registered, so ZooKeeper server won't take any action on ERROR or SHUTDOWN server state changes
2016-10-04 01:04:08,840 [myid:] - INFO [main:SessionTrackerImpl@232] - Shutting down
2016-10-04 01:04:08,841 [myid:] - INFO [main:PrepRequestProcessor@975] - Shutting down
2016-10-04 01:04:08,841 [myid:] - INFO [main:SyncRequestProcessor@191] - Shutting down
2016-10-04 01:04:08,841 [myid:] - INFO [ProcessThread(sid:0 cport:16608)::PrepRequestProcessor@154] - PrepRequestProcessor exited loop!
2016-10-04 01:04:08,843 [myid:] - INFO [SyncThread:0:SyncRequestProcessor@169] - SyncRequestProcessor exited!
2016-10-04 01:04:08,844 [myid:] - INFO [main:FinalRequestProcessor@479] - shutdown of request processor complete
2016-10-04 01:04:08,844 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port16608,name1=InMemoryDataTree]
2016-10-04 01:04:08,844 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port16608]
2016-10-04 01:04:08,845 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 16608
2016-10-04 01:04:08,845 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@82] - Memory used 67299
2016-10-04 01:04:08,845 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@87] - Number of threads 10
2016-10-04 01:04:08,846 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@102] - FINISHED TEST METHOD testDisconnectedAddAuth
2016-10-04 01:04:08,846 [myid:] - INFO [main:ZKTestCase$1@65] - SUCCEEDED testDisconnectedAddAuth
2016-10-04 01:04:08,846 [myid:] - INFO [main:ZKTestCase$1@60] - FINISHED testDisconnectedAddAuth
2016-10-04 01:04:08,847 [myid:] - INFO [main:PortAssignment@85] - Assigned port 16609 from range 16607 - 19299.
2016-10-04 01:04:08,847 [myid:] - INFO [main:ZKTestCase$1@55] - STARTING testAsyncMulti
2016-10-04 01:04:08,848 [myid:] - INFO [main:ClientBase@448] - Initial fdcount is: 60
2016-10-04 01:04:08,995 [myid:] - INFO [main:ClientBase@466] - STARTING server
2016-10-04 01:04:08,996 [myid:] - INFO [main:ClientBase@386] - CREATING server instance 127.0.0.1:16609
2016-10-04 01:04:09,004 [myid:] - INFO [main:ClientBase@361] - STARTING server instance 127.0.0.1:16609
2016-10-04 01:04:09,005 [myid:] - INFO [main:ZooKeeperServer@889] - minSessionTimeout set to 6000
2016-10-04 01:04:09,005 [myid:] - INFO [main:ZooKeeperServer@898] - maxSessionTimeout set to 60000
2016-10-04 01:04:09,005 [myid:] - INFO [main:ZooKeeperServer@159] - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/tmp/test484349696560279448.junit.dir/version-2 snapdir /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/tmp/test484349696560279448.junit.dir/version-2
2016-10-04 01:04:09,006 [myid:] - INFO [main:NettyServerCnxnFactory@487] - binding to port 0.0.0.0/0.0.0.0:16609
2016-10-04 01:04:09,007 [myid:] - INFO [main:FileTxnSnapLog@306] - Snapshotting: 0x0 to /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/tmp/test484349696560279448.junit.dir/version-2/snapshot.0
2016-10-04 01:04:09,012 [myid:] - ERROR [main:ZooKeeperServer@501] - ZKShutdownHandler is not registered, so ZooKeeper server won't take any action on ERROR or SHUTDOWN server state changes
2016-10-04 01:04:09,012 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 16609
2016-10-04 01:04:09,013 [myid:] - INFO [New I/O worker #100:NettyServerCnxn@275] - Processing stat command from /127.0.0.1:49194
2016-10-04 01:04:09,014 [myid:] - INFO [New I/O worker #100:StatCommand@49] - Stat command output
2016-10-04 01:04:09,014 [myid:] - INFO [main:JMXEnv@228] - ensureParent:[InMemoryDataTree, StandaloneServer_port]
2016-10-04 01:04:09,021 [myid:] - INFO [main:JMXEnv@245] - expect:InMemoryDataTree
2016-10-04 01:04:09,021 [myid:] - INFO [main:JMXEnv@249] - found:InMemoryDataTree org.apache.ZooKeeperService:name0=StandaloneServer_port16609,name1=InMemoryDataTree
2016-10-04 01:04:09,022 [myid:] - INFO [main:JMXEnv@245] - expect:StandaloneServer_port
2016-10-04 01:04:09,022 [myid:] - INFO [main:JMXEnv@249] - found:StandaloneServer_port org.apache.ZooKeeperService:name0=StandaloneServer_port16609
2016-10-04 01:04:09,022 [myid:] - INFO [main:ClientBase@462] - Client test setup finished
2016-10-04 01:04:09,022 [myid:] - INFO [main:AsyncOpsTest@50] - Creating client testAsyncMulti
2016-10-04 01:04:09,023 [myid:] - INFO [main:ZooKeeper@853] - Initiating client connection, connectString=127.0.0.1:16609 sessionTimeout=30000 watcher=org.apache.zookeeper.test.ClientBase$CountdownWatcher@5a124d9f
2016-10-04 01:04:09,024 [myid:127.0.0.1:16609] - INFO [main-SendThread(127.0.0.1:16609):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16609. Will not attempt to authenticate using SASL (unknown error)
2016-10-04 01:04:09,024 [myid:127.0.0.1:16609] - INFO [main-SendThread(127.0.0.1:16609):ClientCnxn$SendThread@948] - Socket connection established, initiating session, client: /127.0.0.1:49195, server: 127.0.0.1/127.0.0.1:16609
2016-10-04 01:04:09,026 [myid:] - INFO [New I/O worker #101:ZooKeeperServer@995] - Client attempting to establish new session at /127.0.0.1:49195
2016-10-04 01:04:09,026 [myid:] - INFO [SyncThread:0:FileTxnLog@204] - Creating new log file: log.1
2016-10-04 01:04:09,070 [myid:] - INFO [SyncThread:0:ZooKeeperServer@709] - Established session 0x10020f038cf0000 with negotiated timeout 30000 for client /127.0.0.1:49195
2016-10-04 01:04:09,070 [myid:127.0.0.1:16609] - INFO [main-SendThread(127.0.0.1:16609):ClientCnxn$SendThread@1381] - Session establishment complete on server 127.0.0.1/127.0.0.1:16609, sessionid = 0x10020f038cf0000, negotiated timeout = 30000
2016-10-04 01:04:09,073 [myid:] - INFO [main:JMXEnv@117] - expect:0x10020f038cf0000
2016-10-04 01:04:09,073 [myid:] - INFO [main:JMXEnv@120] - found:0x10020f038cf0000 org.apache.ZooKeeperService:name0=StandaloneServer_port16609,name1=Connections,name2=127.0.0.1,name3=0x10020f038cf0000
2016-10-04 01:04:09,073 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@77] - RUNNING TEST METHOD testAsyncMulti
2016-10-04 01:04:09,074 [myid:] - INFO [New I/O worker #101:ZooKeeperServer@1032] - got auth packet /127.0.0.1:49195
2016-10-04 01:04:09,074 [myid:] - INFO [New I/O worker #101:ZooKeeperServer@1050] - auth success /127.0.0.1:49195
2016-10-04 01:04:09,102 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@82] - Memory used 33655
2016-10-04 01:04:09,102 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@87] - Number of threads 55
2016-10-04 01:04:09,102 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@102] - FINISHED TEST METHOD testAsyncMulti
2016-10-04 01:04:09,103 [myid:] - INFO [ProcessThread(sid:0 cport:16609)::PrepRequestProcessor@657] - Processed session termination for sessionid: 0x10020f038cf0000
2016-10-04 01:04:09,116 [myid:] - INFO [SyncThread:0:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port16609,name1=Connections,name2=127.0.0.1,name3=0x10020f038cf0000]
2016-10-04 01:04:09,118 [myid:] - WARN [New I/O worker #101:NettyServerCnxnFactory$CnxnChannelHandler@142] - Exception caught [id: 0xd523ab2f, /127.0.0.1:49195 :> /127.0.0.1:16609] EXCEPTION: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
	at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:270)
	at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:479)
	at org.jboss.netty.channel.socket.nio.SocketSendBufferPool$UnpooledSendBuffer.transferTo(SocketSendBufferPool.java:203)
	at org.jboss.netty.channel.socket.nio.AbstractNioWorker.write0(AbstractNioWorker.java:201)
	at org.jboss.netty.channel.socket.nio.AbstractNioWorker.writeFromTaskLoop(AbstractNioWorker.java:151)
	at org.jboss.netty.channel.socket.nio.AbstractNioChannel$WriteTask.run(AbstractNioChannel.java:315)
	at org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:391)
	at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:315)
	at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
	at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
	at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
	at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
2016-10-04 01:04:09,216 [myid:] - INFO [main:ZooKeeper@1311] - Session: 0x10020f038cf0000 closed
2016-10-04 01:04:09,217 [myid:] - INFO [main:ClientBase@543] - tearDown starting
2016-10-04 01:04:09,217 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@513] - EventThread shut down for session: 0x10020f038cf0000
2016-10-04 01:04:09,217 [myid:] - INFO [main:ClientBase@513] - STOPPING server
2016-10-04 01:04:09,218 [myid:] - INFO [main:NettyServerCnxnFactory@464] - shutdown called 0.0.0.0/0.0.0.0:16609
2016-10-04 01:04:09,224 [myid:] - INFO [main:ZooKeeperServer@529] - shutting down
2016-10-04 01:04:09,224 [myid:] - ERROR [main:ZooKeeperServer@501] - ZKShutdownHandler is not registered, so ZooKeeper server won't take any action on ERROR or SHUTDOWN server state changes
2016-10-04 01:04:09,224 [myid:] - INFO [main:SessionTrackerImpl@232] - Shutting down
2016-10-04 01:04:09,225 [myid:] - INFO [main:PrepRequestProcessor@975] - Shutting down
2016-10-04 01:04:09,225 [myid:] - INFO [main:SyncRequestProcessor@191] - Shutting down
2016-10-04 01:04:09,225 [myid:] - INFO [ProcessThread(sid:0 cport:16609)::PrepRequestProcessor@154] - PrepRequestProcessor exited loop!
2016-10-04 01:04:09,225 [myid:] - INFO [SyncThread:0:SyncRequestProcessor@169] - SyncRequestProcessor exited!
2016-10-04 01:04:09,226 [myid:] - INFO [main:FinalRequestProcessor@479] - shutdown of request processor complete
2016-10-04 01:04:09,226 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port16609,name1=InMemoryDataTree]
2016-10-04 01:04:09,226 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port16609]
2016-10-04 01:04:09,227 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 16609
2016-10-04 01:04:09,228 [myid:] - INFO [main:JMXEnv@146] - ensureOnly:[]
2016-10-04 01:04:09,232 [myid:] - INFO [main:ClientBase@568] - fdcount after test is: 59 at start it was 60
2016-10-04 01:04:09,233 [myid:] - INFO [main:AsyncOpsTest@63] - Test clients shutting down
2016-10-04 01:04:09,234 [myid:] - INFO [main:ZKTestCase$1@65] - SUCCEEDED testAsyncMulti
2016-10-04 01:04:09,234 [myid:] - INFO [main:ZKTestCase$1@60] - FINISHED testAsyncMulti
2016-10-04 01:04:09,234 [myid:] - INFO [main:PortAssignment@85] - Assigned port 16610 from range 16607 - 19299.
2016-10-04 01:04:09,235 [myid:] - INFO [main:ZKTestCase$1@55] - STARTING testAsyncMultiSequential_NoSideEffect
2016-10-04 01:04:09,235 [myid:] - INFO [main:ClientBase@448] - Initial fdcount is: 59
2016-10-04 01:04:09,243 [myid:] - INFO [main:ClientBase@466] - STARTING server
2016-10-04 01:04:09,243 [myid:] - INFO [main:ClientBase@386] - CREATING server instance 127.0.0.1:16610
2016-10-04 01:04:09,250 [myid:] - INFO [main:ClientBase@361] - STARTING server instance 127.0.0.1:16610
2016-10-04 01:04:09,255 [myid:] - INFO [main:ZooKeeperServer@889] - minSessionTimeout set to 6000
2016-10-04 01:04:09,255 [myid:] - INFO [main:ZooKeeperServer@898] - maxSessionTimeout set to 60000
2016-10-04 01:04:09,255 [myid:] - INFO [main:ZooKeeperServer@159] - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/tmp/test7939377037580146364.junit.dir/version-2 snapdir /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/tmp/test7939377037580146364.junit.dir/version-2
2016-10-04 01:04:09,256 [myid:] - INFO [main:NettyServerCnxnFactory@487] - binding to port 0.0.0.0/0.0.0.0:16610
2016-10-04 01:04:09,257 [myid:] - INFO [main:FileTxnSnapLog@306] - Snapshotting: 0x0 to /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/tmp/test7939377037580146364.junit.dir/version-2/snapshot.0
2016-10-04 01:04:09,258 [myid:] - ERROR [main:ZooKeeperServer@501] - ZKShutdownHandler is not registered, so ZooKeeper server won't take any action on ERROR or SHUTDOWN server state changes
2016-10-04 01:04:09,258 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 16610
2016-10-04 01:04:09,260 [myid:] - INFO [New I/O worker #133:NettyServerCnxn@275] - Processing stat command from /127.0.0.1:34416
2016-10-04 01:04:09,260 [myid:] - INFO [New I/O worker #133:StatCommand@49] - Stat command output
2016-10-04 01:04:09,261 [myid:] - INFO [main:JMXEnv@228] - ensureParent:[InMemoryDataTree, StandaloneServer_port]
2016-10-04 01:04:09,262 [myid:] - INFO [main:JMXEnv@245] - expect:InMemoryDataTree
2016-10-04 01:04:09,263 [myid:] - INFO [main:JMXEnv@249] - found:InMemoryDataTree org.apache.ZooKeeperService:name0=StandaloneServer_port16610,name1=InMemoryDataTree
2016-10-04 01:04:09,263 [myid:] - INFO [main:JMXEnv@245] - expect:StandaloneServer_port
2016-10-04 01:04:09,263 [myid:] - INFO [main:JMXEnv@249] - found:StandaloneServer_port org.apache.ZooKeeperService:name0=StandaloneServer_port16610
2016-10-04 01:04:09,264 [myid:] - INFO [main:ClientBase@462] - Client test setup finished
2016-10-04 01:04:09,264 [myid:] - INFO [main:AsyncOpsTest@50] - Creating client testAsyncMultiSequential_NoSideEffect
2016-10-04 01:04:09,264 [myid:] - INFO [main:ZooKeeper@853] - Initiating client connection, connectString=127.0.0.1:16610 sessionTimeout=30000 watcher=org.apache.zookeeper.test.ClientBase$CountdownWatcher@9a3da81
2016-10-04 01:04:09,266 [myid:127.0.0.1:16610] - INFO [main-SendThread(127.0.0.1:16610):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16610. Will not attempt to authenticate using SASL (unknown error)
2016-10-04 01:04:09,266 [myid:127.0.0.1:16610] - INFO [main-SendThread(127.0.0.1:16610):ClientCnxn$SendThread@948] - Socket connection established, initiating session, client: /127.0.0.1:34417, server: 127.0.0.1/127.0.0.1:16610
2016-10-04 01:04:09,267 [myid:] - INFO [New I/O worker #134:ZooKeeperServer@995] - Client attempting to establish new session at /127.0.0.1:34417
2016-10-04 01:04:09,267 [myid:] - INFO [SyncThread:0:FileTxnLog@204] - Creating new log file: log.1
2016-10-04 01:04:09,305 [myid:] - INFO [SyncThread:0:ZooKeeperServer@709] - Established session 0x10020f039c90000 with negotiated timeout 30000 for client /127.0.0.1:34417
2016-10-04 01:04:09,305 [myid:127.0.0.1:16610] - INFO [main-SendThread(127.0.0.1:16610):ClientCnxn$SendThread@1381] - Session establishment complete on server 127.0.0.1/127.0.0.1:16610, sessionid = 0x10020f039c90000, negotiated timeout = 30000
2016-10-04 01:04:09,308 [myid:] - INFO [main:JMXEnv@117] - expect:0x10020f039c90000
2016-10-04 01:04:09,308 [myid:] - INFO [main:JMXEnv@120] - found:0x10020f039c90000 org.apache.ZooKeeperService:name0=StandaloneServer_port16610,name1=Connections,name2=127.0.0.1,name3=0x10020f039c90000
2016-10-04 01:04:09,309 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@77] - RUNNING TEST METHOD testAsyncMultiSequential_NoSideEffect
2016-10-04 01:04:09,309 [myid:] - INFO [New I/O worker #134:ZooKeeperServer@1032] - got auth packet /127.0.0.1:34417
2016-10-04 01:04:09,310 [myid:] - INFO [New I/O worker #134:ZooKeeperServer@1050] - auth success /127.0.0.1:34417
2016-10-04 01:04:09,384 [myid:] - INFO [ProcessThread(sid:0 cport:16610)::PrepRequestProcessor@790] - Got user-level KeeperException when processing sessionid:0x10020f039c90000 type:multi cxid:0x4 zxid:0x4 txntype:2 reqpath:n/a aborting remaining multi ops. Error Path:/nonexist Error:KeeperErrorCode = NoNode for /nonexist
2016-10-04 01:04:09,430 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@82] - Memory used 62828
2016-10-04 01:04:09,430 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@87] - Number of threads 56
2016-10-04 01:04:09,430 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@102] - FINISHED TEST METHOD testAsyncMultiSequential_NoSideEffect
2016-10-04 01:04:09,431 [myid:] - INFO [ProcessThread(sid:0 cport:16610)::PrepRequestProcessor@657] - Processed session termination for sessionid: 0x10020f039c90000
2016-10-04 01:04:09,454 [myid:] - WARN [New I/O worker #134:NettyServerCnxnFactory$CnxnChannelHandler@142] - Exception caught [id: 0x0743662e, /127.0.0.1:34417 => /127.0.0.1:16610] EXCEPTION: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
	at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:270)
	at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:479)
	at org.jboss.netty.channel.socket.nio.SocketSendBufferPool$UnpooledSendBuffer.transferTo(SocketSendBufferPool.java:203)
	at org.jboss.netty.channel.socket.nio.AbstractNioWorker.write0(AbstractNioWorker.java:201)
	at org.jboss.netty.channel.socket.nio.AbstractNioWorker.writeFromTaskLoop(AbstractNioWorker.java:151)
	at org.jboss.netty.channel.socket.nio.AbstractNioChannel$WriteTask.run(AbstractNioChannel.java:315)
	at org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:391)
	at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:315)
	at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
	at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
	at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
	at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
2016-10-04 01:04:09,454 [myid:] - INFO [SyncThread:0:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port16610,name1=Connections,name2=127.0.0.1,name3=0x10020f039c90000]
2016-10-04 01:04:09,555 [myid:] - INFO [main:ZooKeeper@1311] - Session: 0x10020f039c90000 closed
2016-10-04 01:04:09,555 [myid:] - INFO [main:ClientBase@543] - tearDown starting
2016-10-04 01:04:09,555 [myid:] - INFO [main:ClientBase@513] - STOPPING server
2016-10-04 01:04:09,556 [myid:] - INFO [main:NettyServerCnxnFactory@464] - shutdown called 0.0.0.0/0.0.0.0:16610
2016-10-04 01:04:09,555 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@513] - EventThread shut down for session: 0x10020f039c90000
2016-10-04 01:04:09,562 [myid:] - INFO [main:ZooKeeperServer@529] - shutting down
2016-10-04 01:04:09,562 [myid:] - ERROR [main:ZooKeeperServer@501] - ZKShutdownHandler is not registered, so ZooKeeper server won't take any action on ERROR or SHUTDOWN server state changes
2016-10-04 01:04:09,562 [myid:] - INFO [main:SessionTrackerImpl@232] - Shutting down
2016-10-04 01:04:09,563 [myid:] - INFO [main:PrepRequestProcessor@975] - Shutting down
2016-10-04 01:04:09,563 [myid:] - INFO [main:SyncRequestProcessor@191] - Shutting down
2016-10-04 01:04:09,563 [myid:] - INFO [ProcessThread(sid:0 cport:16610)::PrepRequestProcessor@154] - PrepRequestProcessor exited loop!
2016-10-04 01:0 ...[truncated 6640749 chars]...
ting reconnect
java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744)
	at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357)
	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214)
2016-10-04 01:14:30,775 [myid:127.0.0.1:16699] - INFO [main-SendThread(127.0.0.1:16699):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16699. Will not attempt to authenticate using SASL (unknown error)
2016-10-04 01:14:30,775 [myid:127.0.0.1:16699] - WARN [main-SendThread(127.0.0.1:16699):ClientCnxn$SendThread@1235] - Session 0x70020f2f1ed0001 for server 127.0.0.1/127.0.0.1:16699, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744)
	at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357)
	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214)
2016-10-04 01:14:30,826 [myid:127.0.0.1:16690] - INFO [main-SendThread(127.0.0.1:16690):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16690. Will not attempt to authenticate using SASL (unknown error)
2016-10-04 01:14:30,826 [myid:127.0.0.1:16690] - WARN [main-SendThread(127.0.0.1:16690):ClientCnxn$SendThread@1235] - Session 0x40020f2e0da0001 for server 127.0.0.1/127.0.0.1:16690, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744)
	at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357)
	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214)
2016-10-04 01:14:30,912 [myid:] - INFO [SessionTracker:SessionTrackerImpl@158] - SessionTrackerImpl exited loop!
2016-10-04 01:14:30,913 [myid:] - INFO [SessionTracker:SessionTrackerImpl@158] - SessionTrackerImpl exited loop!
2016-10-04 01:14:30,913 [myid:] - INFO [SessionTracker:SessionTrackerImpl@158] - SessionTrackerImpl exited loop!
2016-10-04 01:14:30,981 [myid:127.0.0.1:16696] - INFO [main-SendThread(127.0.0.1:16696):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16696. Will not attempt to authenticate using SASL (unknown error)
2016-10-04 01:14:30,982 [myid:127.0.0.1:16696] - WARN [main-SendThread(127.0.0.1:16696):ClientCnxn$SendThread@1235] - Session 0x60020f2ea050001 for server 127.0.0.1/127.0.0.1:16696, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744)
	at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357)
	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214)
2016-10-04 01:14:31,050 [myid:127.0.0.1:16681] - INFO [main-SendThread(127.0.0.1:16681):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16681. 
Will not attempt to authenticate using SASL (unknown error) 2016-10-04 01:14:31,051 [myid:127.0.0.1:16681] - WARN [main-SendThread(127.0.0.1:16681):ClientCnxn$SendThread@1235] - Session 0x10020f2dfad0001 for server 127.0.0.1/127.0.0.1:16681, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) 2016-10-04 01:14:31,051 [myid:127.0.0.1:16687] - INFO [main-SendThread(127.0.0.1:16687):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16687. Will not attempt to authenticate using SASL (unknown error) 2016-10-04 01:14:31,051 [myid:127.0.0.1:16687] - WARN [main-SendThread(127.0.0.1:16687):ClientCnxn$SendThread@1235] - Session 0x30020f2dfad0001 for server 127.0.0.1/127.0.0.1:16687, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) 2016-10-04 01:14:31,073 [myid:127.0.0.1:16693] - INFO [main-SendThread(127.0.0.1:16693):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16693. 
Will not attempt to authenticate using SASL (unknown error) 2016-10-04 01:14:31,074 [myid:127.0.0.1:16693] - WARN [main-SendThread(127.0.0.1:16693):ClientCnxn$SendThread@1235] - Session 0x50020f2e0e30001 for server 127.0.0.1/127.0.0.1:16693, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) 2016-10-04 01:14:31,152 [myid:127.0.0.1:16729] - INFO [main-SendThread(127.0.0.1:16729):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16729. Will not attempt to authenticate using SASL (unknown error) 2016-10-04 01:14:31,152 [myid:127.0.0.1:16729] - WARN [main-SendThread(127.0.0.1:16729):ClientCnxn$SendThread@1235] - Session 0x10020f685ea0000 for server 127.0.0.1/127.0.0.1:16729, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) 2016-10-04 01:14:31,190 [myid:127.0.0.1:16681] - INFO [main-SendThread(127.0.0.1:16681):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16681. 
Will not attempt to authenticate using SASL (unknown error) 2016-10-04 01:14:31,190 [myid:127.0.0.1:16681] - WARN [main-SendThread(127.0.0.1:16681):ClientCnxn$SendThread@1235] - Session 0x10020f2dfad0000 for server 127.0.0.1/127.0.0.1:16681, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) 2016-10-04 01:14:31,315 [myid:] - INFO [main:ClientBase@466] - STARTING server 2016-10-04 01:14:31,315 [myid:] - INFO [main:ClientBase@386] - CREATING server instance 127.0.0.1:16852 2016-10-04 01:14:31,324 [myid:] - INFO [main:ClientBase@361] - STARTING server instance 127.0.0.1:16852 2016-10-04 01:14:31,324 [myid:] - INFO [main:ZooKeeperServer@889] - minSessionTimeout set to 6000 2016-10-04 01:14:31,324 [myid:] - INFO [main:ZooKeeperServer@898] - maxSessionTimeout set to 60000 2016-10-04 01:14:31,324 [myid:] - INFO [main:ZooKeeperServer@159] - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/tmp/test8970971002755443544.junit.dir/version-2 snapdir /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/tmp/test8970971002755443544.junit.dir/version-2 2016-10-04 01:14:31,324 [myid:] - INFO [main:NettyServerCnxnFactory@487] - binding to port 0.0.0.0/0.0.0.0:16852 2016-10-04 01:14:31,325 [myid:] - INFO [main:FileSnap@83] - Reading snapshot /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/tmp/test8970971002755443544.junit.dir/version-2/snapshot.3 2016-10-04 01:14:31,326 [myid:] - INFO [main:FileTxnSnapLog@306] - Snapshotting: 0x5 to 
/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/tmp/test8970971002755443544.junit.dir/version-2/snapshot.5 2016-10-04 01:14:31,327 [myid:] - ERROR [main:ZooKeeperServer@501] - ZKShutdownHandler is not registered, so ZooKeeper server won't take any action on ERROR or SHUTDOWN server state changes 2016-10-04 01:14:31,328 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 16852 2016-10-04 01:14:31,329 [myid:] - INFO [New I/O worker #6601:NettyServerCnxn@275] - Processing stat command from /127.0.0.1:38348 2016-10-04 01:14:31,329 [myid:] - INFO [New I/O worker #6601:StatCommand@49] - Stat command output 2016-10-04 01:14:31,329 [myid:] - INFO [main:JMXEnv@228] - ensureParent:[InMemoryDataTree, StandaloneServer_port] 2016-10-04 01:14:31,331 [myid:] - INFO [main:JMXEnv@245] - expect:InMemoryDataTree 2016-10-04 01:14:31,331 [myid:] - INFO [main:JMXEnv@249] - found:InMemoryDataTree org.apache.ZooKeeperService:name0=StandaloneServer_port16852,name1=InMemoryDataTree 2016-10-04 01:14:31,331 [myid:] - INFO [main:JMXEnv@245] - expect:StandaloneServer_port 2016-10-04 01:14:31,331 [myid:] - INFO [main:JMXEnv@249] - found:StandaloneServer_port org.apache.ZooKeeperService:name0=StandaloneServer_port16852 2016-10-04 01:14:31,433 [myid:127.0.0.1:16699] - INFO [main-SendThread(127.0.0.1:16699):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16699. 
Will not attempt to authenticate using SASL (unknown error) 2016-10-04 01:14:31,434 [myid:127.0.0.1:16699] - WARN [main-SendThread(127.0.0.1:16699):ClientCnxn$SendThread@1235] - Session 0x70020f2f1ed0000 for server 127.0.0.1/127.0.0.1:16699, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) 2016-10-04 01:14:31,554 [myid:127.0.0.1:16684] - INFO [main-SendThread(127.0.0.1:16684):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16684. Will not attempt to authenticate using SASL (unknown error) 2016-10-04 01:14:31,554 [myid:127.0.0.1:16684] - WARN [main-SendThread(127.0.0.1:16684):ClientCnxn$SendThread@1235] - Session 0x20020f2dfad0000 for server 127.0.0.1/127.0.0.1:16684, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) 2016-10-04 01:14:31,673 [myid:127.0.0.1:16732] - INFO [main-SendThread(127.0.0.1:16732):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16732. 
Will not attempt to authenticate using SASL (unknown error) 2016-10-04 01:14:31,674 [myid:127.0.0.1:16732] - WARN [main-SendThread(127.0.0.1:16732):ClientCnxn$SendThread@1235] - Session 0x20020f689b10000 for server 127.0.0.1/127.0.0.1:16732, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) 2016-10-04 01:14:31,865 [myid:127.0.0.1:16735] - INFO [main-SendThread(127.0.0.1:16735):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16735. Will not attempt to authenticate using SASL (unknown error) 2016-10-04 01:14:31,865 [myid:127.0.0.1:16735] - WARN [main-SendThread(127.0.0.1:16735):ClientCnxn$SendThread@1235] - Session 0x30020f6861c0000 for server 127.0.0.1/127.0.0.1:16735, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) 2016-10-04 01:14:31,926 [myid:127.0.0.1:16684] - INFO [main-SendThread(127.0.0.1:16684):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16684. 
Will not attempt to authenticate using SASL (unknown error) 2016-10-04 01:14:31,926 [myid:127.0.0.1:16684] - WARN [main-SendThread(127.0.0.1:16684):ClientCnxn$SendThread@1235] - Session 0x20020f2dfad0001 for server 127.0.0.1/127.0.0.1:16684, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) 2016-10-04 01:14:31,964 [myid:127.0.0.1:16693] - INFO [main-SendThread(127.0.0.1:16693):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16693. Will not attempt to authenticate using SASL (unknown error) 2016-10-04 01:14:31,964 [myid:127.0.0.1:16693] - WARN [main-SendThread(127.0.0.1:16693):ClientCnxn$SendThread@1235] - Session 0x50020f2e0e30000 for server 127.0.0.1/127.0.0.1:16693, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) 2016-10-04 01:14:32,120 [myid:127.0.0.1:16608] - INFO [main-SendThread(127.0.0.1:16608):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16608. 
Will not attempt to authenticate using SASL (unknown error) 2016-10-04 01:14:32,120 [myid:127.0.0.1:16608] - WARN [main-SendThread(127.0.0.1:16608):ClientCnxn$SendThread@1235] - Session 0x10020f0235b0000 for server 127.0.0.1/127.0.0.1:16608, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) 2016-10-04 01:14:32,158 [myid:127.0.0.1:16690] - INFO [main-SendThread(127.0.0.1:16690):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16690. Will not attempt to authenticate using SASL (unknown error) 2016-10-04 01:14:32,158 [myid:127.0.0.1:16690] - WARN [main-SendThread(127.0.0.1:16690):ClientCnxn$SendThread@1235] - Session 0x40020f2e0da0000 for server 127.0.0.1/127.0.0.1:16690, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) 2016-10-04 01:14:32,201 [myid:127.0.0.1:16852] - INFO [main-SendThread(127.0.0.1:16852):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16852. 
Will not attempt to authenticate using SASL (unknown error) 2016-10-04 01:14:32,201 [myid:127.0.0.1:16852] - INFO [main-SendThread(127.0.0.1:16852):ClientCnxn$SendThread@948] - Socket connection established, initiating session, client: /127.0.0.1:38382, server: 127.0.0.1/127.0.0.1:16852 2016-10-04 01:14:32,202 [myid:] - INFO [New I/O worker #6602:ZooKeeperServer@1000] - Client attempting to renew session 0x10020f9ae620000 at /127.0.0.1:38382 2016-10-04 01:14:32,203 [myid:] - INFO [New I/O worker #6602:ZooKeeperServer@709] - Established session 0x10020f9ae620000 with negotiated timeout 6000 for client /127.0.0.1:38382 2016-10-04 01:14:32,203 [myid:127.0.0.1:16852] - INFO [main-SendThread(127.0.0.1:16852):ClientCnxn$SendThread@1381] - Session establishment complete on server 127.0.0.1/127.0.0.1:16852, sessionid = 0x10020f9ae620000, negotiated timeout = 6000 2016-10-04 01:14:32,210 [myid:127.0.0.1:16687] - INFO [main-SendThread(127.0.0.1:16687):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16687. 
Will not attempt to authenticate using SASL (unknown error) 2016-10-04 01:14:32,210 [myid:127.0.0.1:16687] - WARN [main-SendThread(127.0.0.1:16687):ClientCnxn$SendThread@1235] - Session 0x30020f2dfad0001 for server 127.0.0.1/127.0.0.1:16687, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) 2016-10-04 01:14:32,210 [myid:] - INFO [SyncThread:0:FileTxnLog@204] - Creating new log file: log.6 2016-10-04 01:14:32,248 [myid:] - INFO [main:ClientBase@513] - STOPPING server 2016-10-04 01:14:32,248 [myid:] - INFO [main:NettyServerCnxnFactory@464] - shutdown called 0.0.0.0/0.0.0.0:16852 2016-10-04 01:14:32,248 [myid:127.0.0.1:16852] - INFO [main-SendThread(127.0.0.1:16852):ClientCnxn$SendThread@1231] - Unable to read additional data from server sessionid 0x10020f9ae620000, likely server has closed socket, closing socket connection and attempting reconnect 2016-10-04 01:14:32,248 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port16852,name1=Connections,name2=127.0.0.1,name3=0x10020f9ae620000] 2016-10-04 01:14:32,254 [myid:] - INFO [main:ZooKeeperServer@529] - shutting down 2016-10-04 01:14:32,254 [myid:] - ERROR [main:ZooKeeperServer@501] - ZKShutdownHandler is not registered, so ZooKeeper server won't take any action on ERROR or SHUTDOWN server state changes 2016-10-04 01:14:32,254 [myid:] - INFO [main:SessionTrackerImpl@232] - Shutting down 2016-10-04 01:14:32,254 [myid:] - INFO [main:PrepRequestProcessor@975] - Shutting down 2016-10-04 01:14:32,254 [myid:] - INFO [main:SyncRequestProcessor@191] - Shutting down 2016-10-04 01:14:32,254 [myid:] - INFO 
[ProcessThread(sid:0 cport:16852)::PrepRequestProcessor@154] - PrepRequestProcessor exited loop! 2016-10-04 01:14:32,255 [myid:] - INFO [SyncThread:0:SyncRequestProcessor@169] - SyncRequestProcessor exited! 2016-10-04 01:14:32,255 [myid:] - INFO [main:FinalRequestProcessor@479] - shutdown of request processor complete 2016-10-04 01:14:32,255 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port16852,name1=InMemoryDataTree] 2016-10-04 01:14:32,255 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port16852] 2016-10-04 01:14:32,256 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 16852 2016-10-04 01:14:32,256 [myid:] - INFO [main:JMXEnv@146] - ensureOnly:[] 2016-10-04 01:14:32,349 [myid:] - INFO [main:ClientBase@466] - STARTING server 2016-10-04 01:14:32,349 [myid:] - INFO [main:ClientBase@386] - CREATING server instance 127.0.0.1:16852 2016-10-04 01:14:32,356 [myid:] - INFO [main:ClientBase@361] - STARTING server instance 127.0.0.1:16852 2016-10-04 01:14:32,356 [myid:] - INFO [main:ZooKeeperServer@889] - minSessionTimeout set to 6000 2016-10-04 01:14:32,356 [myid:] - INFO [main:ZooKeeperServer@898] - maxSessionTimeout set to 60000 2016-10-04 01:14:32,356 [myid:] - INFO [main:ZooKeeperServer@159] - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/tmp/test8970971002755443544.junit.dir/version-2 snapdir /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/tmp/test8970971002755443544.junit.dir/version-2 2016-10-04 01:14:32,356 [myid:] - INFO [main:NettyServerCnxnFactory@487] - binding to port 0.0.0.0/0.0.0.0:16852 2016-10-04 01:14:32,357 [myid:] - INFO [main:FileSnap@83] - Reading snapshot 
/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/tmp/test8970971002755443544.junit.dir/version-2/snapshot.5 2016-10-04 01:14:32,358 [myid:] - INFO [main:FileTxnSnapLog@306] - Snapshotting: 0x6 to /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/build/test/tmp/test8970971002755443544.junit.dir/version-2/snapshot.6 2016-10-04 01:14:32,359 [myid:] - ERROR [main:ZooKeeperServer@501] - ZKShutdownHandler is not registered, so ZooKeeper server won't take any action on ERROR or SHUTDOWN server state changes 2016-10-04 01:14:32,359 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 16852 2016-10-04 01:14:32,360 [myid:] - INFO [New I/O worker #6634:NettyServerCnxn@275] - Processing stat command from /127.0.0.1:38388 2016-10-04 01:14:32,360 [myid:] - INFO [New I/O worker #6634:StatCommand@49] - Stat command output 2016-10-04 01:14:32,361 [myid:] - INFO [main:JMXEnv@228] - ensureParent:[InMemoryDataTree, StandaloneServer_port] 2016-10-04 01:14:32,362 [myid:] - INFO [main:JMXEnv@245] - expect:InMemoryDataTree 2016-10-04 01:14:32,362 [myid:] - INFO [main:JMXEnv@249] - found:InMemoryDataTree org.apache.ZooKeeperService:name0=StandaloneServer_port16852,name1=InMemoryDataTree 2016-10-04 01:14:32,362 [myid:] - INFO [main:JMXEnv@245] - expect:StandaloneServer_port 2016-10-04 01:14:32,362 [myid:] - INFO [main:JMXEnv@249] - found:StandaloneServer_port org.apache.ZooKeeperService:name0=StandaloneServer_port16852 2016-10-04 01:14:32,422 [myid:127.0.0.1:16687] - INFO [main-SendThread(127.0.0.1:16687):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16687. 
Will not attempt to authenticate using SASL (unknown error) 2016-10-04 01:14:32,422 [myid:127.0.0.1:16687] - WARN [main-SendThread(127.0.0.1:16687):ClientCnxn$SendThread@1235] - Session 0x30020f2dfad0000 for server 127.0.0.1/127.0.0.1:16687, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) 2016-10-04 01:14:32,527 [myid:127.0.0.1:16696] - INFO [main-SendThread(127.0.0.1:16696):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16696. Will not attempt to authenticate using SASL (unknown error) 2016-10-04 01:14:32,528 [myid:127.0.0.1:16696] - WARN [main-SendThread(127.0.0.1:16696):ClientCnxn$SendThread@1235] - Session 0x60020f2ea050001 for server 127.0.0.1/127.0.0.1:16696, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) 2016-10-04 01:14:32,580 [myid:127.0.0.1:16699] - INFO [main-SendThread(127.0.0.1:16699):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16699. 
Will not attempt to authenticate using SASL (unknown error) 2016-10-04 01:14:32,581 [myid:127.0.0.1:16699] - WARN [main-SendThread(127.0.0.1:16699):ClientCnxn$SendThread@1235] - Session 0x70020f2f1ed0000 for server 127.0.0.1/127.0.0.1:16699, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) 2016-10-04 01:14:32,718 [myid:127.0.0.1:16699] - INFO [main-SendThread(127.0.0.1:16699):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16699. Will not attempt to authenticate using SASL (unknown error) 2016-10-04 01:14:32,718 [myid:127.0.0.1:16699] - WARN [main-SendThread(127.0.0.1:16699):ClientCnxn$SendThread@1235] - Session 0x70020f2f1ed0001 for server 127.0.0.1/127.0.0.1:16699, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) 2016-10-04 01:14:32,737 [myid:127.0.0.1:16696] - INFO [main-SendThread(127.0.0.1:16696):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16696. 
Will not attempt to authenticate using SASL (unknown error) 2016-10-04 01:14:32,737 [myid:127.0.0.1:16696] - WARN [main-SendThread(127.0.0.1:16696):ClientCnxn$SendThread@1235] - Session 0x60020f2ea050000 for server 127.0.0.1/127.0.0.1:16696, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) 2016-10-04 01:14:32,791 [myid:127.0.0.1:16690] - INFO [main-SendThread(127.0.0.1:16690):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16690. Will not attempt to authenticate using SASL (unknown error) 2016-10-04 01:14:32,791 [myid:127.0.0.1:16690] - WARN [main-SendThread(127.0.0.1:16690):ClientCnxn$SendThread@1235] - Session 0x40020f2e0da0001 for server 127.0.0.1/127.0.0.1:16690, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) 2016-10-04 01:14:32,853 [myid:127.0.0.1:16729] - INFO [main-SendThread(127.0.0.1:16729):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16729. 
Will not attempt to authenticate using SASL (unknown error) 2016-10-04 01:14:32,853 [myid:127.0.0.1:16729] - WARN [main-SendThread(127.0.0.1:16729):ClientCnxn$SendThread@1235] - Session 0x10020f685ea0000 for server 127.0.0.1/127.0.0.1:16729, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) 2016-10-04 01:14:32,913 [myid:127.0.0.1:16681] - INFO [main-SendThread(127.0.0.1:16681):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16681. Will not attempt to authenticate using SASL (unknown error) 2016-10-04 01:14:32,913 [myid:127.0.0.1:16681] - WARN [main-SendThread(127.0.0.1:16681):ClientCnxn$SendThread@1235] - Session 0x10020f2dfad0000 for server 127.0.0.1/127.0.0.1:16681, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) 2016-10-04 01:14:32,975 [myid:127.0.0.1:16684] - INFO [main-SendThread(127.0.0.1:16684):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16684. 
Will not attempt to authenticate using SASL (unknown error) 2016-10-04 01:14:32,975 [myid:127.0.0.1:16684] - WARN [main-SendThread(127.0.0.1:16684):ClientCnxn$SendThread@1235] - Session 0x20020f2dfad0000 for server 127.0.0.1/127.0.0.1:16684, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) 2016-10-04 01:14:33,037 [myid:127.0.0.1:16684] - INFO [main-SendThread(127.0.0.1:16684):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16684. Will not attempt to authenticate using SASL (unknown error) 2016-10-04 01:14:33,037 [myid:127.0.0.1:16684] - WARN [main-SendThread(127.0.0.1:16684):ClientCnxn$SendThread@1235] - Session 0x20020f2dfad0001 for server 127.0.0.1/127.0.0.1:16684, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) 2016-10-04 01:14:33,077 [myid:127.0.0.1:16681] - INFO [main-SendThread(127.0.0.1:16681):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16681. 
Will not attempt to authenticate using SASL (unknown error) 2016-10-04 01:14:33,078 [myid:127.0.0.1:16681] - WARN [main-SendThread(127.0.0.1:16681):ClientCnxn$SendThread@1235] - Session 0x10020f2dfad0001 for server 127.0.0.1/127.0.0.1:16681, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) 2016-10-04 01:14:33,113 [myid:127.0.0.1:16693] - INFO [main-SendThread(127.0.0.1:16693):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16693. Will not attempt to authenticate using SASL (unknown error) 2016-10-04 01:14:33,114 [myid:127.0.0.1:16693] - WARN [main-SendThread(127.0.0.1:16693):ClientCnxn$SendThread@1235] - Session 0x50020f2e0e30001 for server 127.0.0.1/127.0.0.1:16693, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) 2016-10-04 01:14:33,128 [myid:127.0.0.1:16693] - INFO [main-SendThread(127.0.0.1:16693):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16693. 
Will not attempt to authenticate using SASL (unknown error) 2016-10-04 01:14:33,130 [myid:127.0.0.1:16693] - WARN [main-SendThread(127.0.0.1:16693):ClientCnxn$SendThread@1235] - Session 0x50020f2e0e30000 for server 127.0.0.1/127.0.0.1:16693, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) 2016-10-04 01:14:33,133 [myid:127.0.0.1:16732] - INFO [main-SendThread(127.0.0.1:16732):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16732. Will not attempt to authenticate using SASL (unknown error) 2016-10-04 01:14:33,134 [myid:127.0.0.1:16732] - WARN [main-SendThread(127.0.0.1:16732):ClientCnxn$SendThread@1235] - Session 0x20020f689b10000 for server 127.0.0.1/127.0.0.1:16732, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) 2016-10-04 01:14:33,361 [myid:127.0.0.1:16852] - INFO [main-SendThread(127.0.0.1:16852):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16852. 
Will not attempt to authenticate using SASL (unknown error) 2016-10-04 01:14:33,362 [myid:127.0.0.1:16852] - INFO [main-SendThread(127.0.0.1:16852):ClientCnxn$SendThread@948] - Socket connection established, initiating session, client: /127.0.0.1:38420, server: 127.0.0.1/127.0.0.1:16852 2016-10-04 01:14:33,362 [myid:] - INFO [New I/O worker #6635:ZooKeeperServer@1000] - Client attempting to renew session 0x10020f9ae620000 at /127.0.0.1:38420 2016-10-04 01:14:33,363 [myid:] - INFO [New I/O worker #6635:ZooKeeperServer@709] - Established session 0x10020f9ae620000 with negotiated timeout 6000 for client /127.0.0.1:38420 2016-10-04 01:14:33,363 [myid:127.0.0.1:16852] - INFO [main-SendThread(127.0.0.1:16852):ClientCnxn$SendThread@1381] - Session establishment complete on server 127.0.0.1/127.0.0.1:16852, sessionid = 0x10020f9ae620000, negotiated timeout = 6000 2016-10-04 01:14:33,368 [myid:] - INFO [SyncThread:0:FileTxnLog@204] - Creating new log file: log.7 2016-10-04 01:14:33,548 [myid:127.0.0.1:16608] - INFO [main-SendThread(127.0.0.1:16608):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16608. Will not attempt to authenticate using SASL (unknown error) 2016-10-04 01:14:33,548 [myid:127.0.0.1:16608] - WARN [main-SendThread(127.0.0.1:16608):ClientCnxn$SendThread@1235] - Session 0x10020f0235b0000 for server 127.0.0.1/127.0.0.1:16608, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) 2016-10-04 01:14:33,724 [myid:127.0.0.1:16699] - INFO [main-SendThread(127.0.0.1:16699):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16699. 
Will not attempt to authenticate using SASL (unknown error) 2016-10-04 01:14:33,724 [myid:127.0.0.1:16699] - WARN [main-SendThread(127.0.0.1:16699):ClientCnxn$SendThread@1235] - Session 0x70020f2f1ed0000 for server 127.0.0.1/127.0.0.1:16699, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) 2016-10-04 01:14:33,875 [myid:127.0.0.1:16690] - INFO [main-SendThread(127.0.0.1:16690):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16690. Will not attempt to authenticate using SASL (unknown error) 2016-10-04 01:14:33,875 [myid:127.0.0.1:16690] - WARN [main-SendThread(127.0.0.1:16690):ClientCnxn$SendThread@1235] - Session 0x40020f2e0da0000 for server 127.0.0.1/127.0.0.1:16690, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) 2016-10-04 01:14:33,881 [myid:127.0.0.1:16735] - INFO [main-SendThread(127.0.0.1:16735):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16735. 
Will not attempt to authenticate using SASL (unknown error) 2016-10-04 01:14:33,881 [myid:127.0.0.1:16735] - WARN [main-SendThread(127.0.0.1:16735):ClientCnxn$SendThread@1235] - Session 0x30020f6861c0000 for server 127.0.0.1/127.0.0.1:16735, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) 2016-10-04 01:14:33,899 [myid:127.0.0.1:16696] - INFO [main-SendThread(127.0.0.1:16696):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16696. Will not attempt to authenticate using SASL (unknown error) 2016-10-04 01:14:33,899 [myid:127.0.0.1:16696] - WARN [main-SendThread(127.0.0.1:16696):ClientCnxn$SendThread@1235] - Session 0x60020f2ea050000 for server 127.0.0.1/127.0.0.1:16696, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) 2016-10-04 01:14:33,912 [myid:] - INFO [SessionTracker:SessionTrackerImpl@158] - SessionTrackerImpl exited loop! 2016-10-04 01:14:33,929 [myid:127.0.0.1:16696] - INFO [main-SendThread(127.0.0.1:16696):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16696. 
Will not attempt to authenticate using SASL (unknown error) 2016-10-04 01:14:33,930 [myid:127.0.0.1:16696] - WARN [main-SendThread(127.0.0.1:16696):ClientCnxn$SendThread@1235] - Session 0x60020f2ea050001 for server 127.0.0.1/127.0.0.1:16696, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) 2016-10-04 01:14:34,202 [myid:127.0.0.1:16687] - INFO [main-SendThread(127.0.0.1:16687):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16687. Will not attempt to authenticate using SASL (unknown error) 2016-10-04 01:14:34,202 [myid:127.0.0.1:16687] - WARN [main-SendThread(127.0.0.1:16687):ClientCnxn$SendThread@1235] - Session 0x30020f2dfad0000 for server 127.0.0.1/127.0.0.1:16687, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) 2016-10-04 01:14:34,233 [myid:127.0.0.1:16687] - INFO [main-SendThread(127.0.0.1:16687):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16687. 
Will not attempt to authenticate using SASL (unknown error) 2016-10-04 01:14:34,233 [myid:127.0.0.1:16687] - WARN [main-SendThread(127.0.0.1:16687):ClientCnxn$SendThread@1235] - Session 0x30020f2dfad0001 for server 127.0.0.1/127.0.0.1:16687, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) 2016-10-04 01:14:34,240 [myid:127.0.0.1:16693] - INFO [main-SendThread(127.0.0.1:16693):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16693. Will not attempt to authenticate using SASL (unknown error) 2016-10-04 01:14:34,241 [myid:127.0.0.1:16693] - WARN [main-SendThread(127.0.0.1:16693):ClientCnxn$SendThread@1235] - Session 0x50020f2e0e30000 for server 127.0.0.1/127.0.0.1:16693, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) 2016-10-04 01:14:34,349 [myid:127.0.0.1:16681] - INFO [main-SendThread(127.0.0.1:16681):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16681. 
Will not attempt to authenticate using SASL (unknown error) 2016-10-04 01:14:34,350 [myid:127.0.0.1:16681] - WARN [main-SendThread(127.0.0.1:16681):ClientCnxn$SendThread@1235] - Session 0x10020f2dfad0001 for server 127.0.0.1/127.0.0.1:16681, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) 2016-10-04 01:14:34,424 [myid:] - INFO [ProcessThread(sid:0 cport:16852)::PrepRequestProcessor@657] - Processed session termination for sessionid: 0x10020f9ae620000 2016-10-04 01:14:34,432 [myid:] - WARN [New I/O worker #6635:NettyServerCnxnFactory$CnxnChannelHandler@142] - Exception caught [id: 0xfb034ea1, /127.0.0.1:38420 :> /127.0.0.1:16852] EXCEPTION: java.nio.channels.ClosedChannelException java.nio.channels.ClosedChannelException at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:270) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:479) at org.jboss.netty.channel.socket.nio.SocketSendBufferPool$UnpooledSendBuffer.transferTo(SocketSendBufferPool.java:203) at org.jboss.netty.channel.socket.nio.AbstractNioWorker.write0(AbstractNioWorker.java:201) at org.jboss.netty.channel.socket.nio.AbstractNioWorker.writeFromTaskLoop(AbstractNioWorker.java:151) at org.jboss.netty.channel.socket.nio.AbstractNioChannel$WriteTask.run(AbstractNioChannel.java:315) at org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:391) at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:315) at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) 
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 2016-10-04 01:14:34,432 [myid:] - INFO [SyncThread:0:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port16852,name1=Connections,name2=127.0.0.1,name3=0x10020f9ae620000] 2016-10-04 01:14:34,473 [myid:127.0.0.1:16729] - INFO [main-SendThread(127.0.0.1:16729):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:16729. Will not attempt to authenticate using SASL (unknown error) 2016-10-04 01:14:34,474 [myid:127.0.0.1:16729] - WARN [main-SendThread(127.0.0.1:16729):ClientCnxn$SendThread@1235] - Session 0x10020f685ea0000 for server 127.0.0.1/127.0.0.1:16729, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) 2016-10-04 01:14:34,532 [myid:] - INFO [main:ZooKeeper@1311] - Session: 0x10020f9ae620000 closed 2016-10-04 01:14:34,532 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@82] - Memory used 187485 2016-10-04 01:14:34,532 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@87] - Number of threads 1671 2016-10-04 01:14:34,533 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@102] - FINISHED TEST METHOD testWatcherAutoResetWithLocal 2016-10-04 01:14:34,533 [myid:] - INFO [main:ClientBase@543] - tearDown starting 2016-10-04 01:14:34,533 [myid:] - INFO 
[main:ClientBase@513] - STOPPING server 2016-10-04 01:14:34,533 [myid:] - INFO [main:NettyServerCnxnFactory@464] - shutdown called 0.0.0.0/0.0.0.0:16852 2016-10-04 01:14:34,532 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@513] - EventThread shut down for session: 0x10020f9ae620000 2016-10-04 01:14:34,538 [myid:] - INFO [main:ZooKeeperServer@529] - shutting down 2016-10-04 01:14:34,539 [myid:] - ERROR [main:ZooKeeperServer@501] - ZKShutdownHandler is not registered, so ZooKeeper server won't take any action on ERROR or SHUTDOWN server state changes 2016-10-04 01:14:34,539 [myid:] - INFO [main:SessionTrackerImpl@232] - Shutting down 2016-10-04 01:14:34,539 [myid:] - INFO [main:PrepRequestProcessor@975] - Shutting down 2016-10-04 01:14:34,539 [myid:] - INFO [main:SyncRequestProcessor@191] - Shutting down 2016-10-04 01:14:34,539 [myid:] - INFO [ProcessThread(sid:0 cport:16852)::PrepRequestProcessor@154] - PrepRequestProcessor exited loop! 2016-10-04 01:14:34,539 [myid:] - INFO [SyncThread:0:SyncRequestProcessor@169] - SyncRequestProcessor exited! 
2016-10-04 01:14:34,539 [myid:] - INFO [main:FinalRequestProcessor@479] - shutdown of request processor complete 2016-10-04 01:14:34,540 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port16852,name1=InMemoryDataTree] 2016-10-04 01:14:34,540 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port16852] 2016-10-04 01:14:34,540 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 16852 2016-10-04 01:14:34,541 [myid:] - INFO [main:JMXEnv@146] - ensureOnly:[] 2016-10-04 01:14:34,547 [myid:] - INFO [main:ClientBase@568] - fdcount after test is: 4887 at start it was 4887 2016-10-04 01:14:34,547 [myid:] - INFO [main:ZKTestCase$1@65] - SUCCEEDED testWatcherAutoResetWithLocal 2016-10-04 01:14:34,547 [myid:] - INFO [main:ZKTestCase$1@60] - FINISHED testWatcherAutoResetWithLocal {noformat} |
flaky, flaky-test | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 1 year, 21 weeks ago | 0|i34mg7: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2609 | ZOOKEEPER-2169 Add TTL Node APIs to C client |
Sub-task | Closed | Major | Fixed | Balazs Meszaros | Jordan Zimmerman | Jordan Zimmerman | 08/Oct/16 12:59 | 16/Oct/19 14:59 | 09/Apr/19 05:39 | 3.6.0, 3.5.6 | c client, java client, jute, server | 0 | 2 | 0 | 1800 | ZOOKEEPER-2168, ZOOKEEPER-2543 | Need to update the C lib to have the TTL node option | 100% | 100% | 1800 | 0 | pull-request-available, ttl_nodes | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 49 weeks, 2 days ago | 0|i34mcn: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2608 | ZOOKEEPER-2169 Create CLI option for TTL ephemerals |
Sub-task | Resolved | Major | Fixed | Jordan Zimmerman | Camille Fournier | Camille Fournier | 07/Oct/16 15:40 | 21/Jan/19 08:41 | 23/Mar/17 13:47 | 3.6.0 | c client, java client, jute, server | 0 | 5 | Need to update CreateCommand to have the TTL node option | ttl_nodes | 9223372036854775807 | No Perforce job exists for this issue. | 3 | 9223372036854775807 | 3 years ago | 0|i34ldr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2607 | doxygen-related, ./configure fails |
Bug | Open | Major | Unresolved | Unassigned | HIthu Anand | HIthu Anand | 07/Oct/16 03:17 | 07/Oct/16 03:24 | 0 | 2 | Ubuntu 14.04, linux 4.4.0-38-generic #57~14.04.1-Ubuntu SMP Tue Sep 6 17:20:43 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | hithu@linux:~/opt/gridlabd-src-2_0_2363$ autoreconf -isf
configure.ac:95: error: AC_SUBST: `DX_FLAG_[]DX_CURRENT_FEATURE' is not a valid shell variable name
m4/dx_doxygen.m4:77: DX_REQUIRE_PROG is expanded from...
m4/dx_doxygen.m4:117: DX_ARG_ABLE is expanded from...
m4/dx_doxygen.m4:178: DX_INIT_DOXYGEN is expanded from...
configure.ac:95: the top level
autom4te: /usr/bin/m4 failed with exit status: 1
aclocal: error: echo failed with exit status: 1
autoreconf: aclocal failed with exit status: 1
hithu@linux:~$ doxygen --version
1.8.6
hithu@linux:~$ autoconf --version
autoconf (GNU Autoconf) 2.69
hithu@linux:~$ automake --version
automake (GNU automake) 1.14.1 |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 23 weeks, 6 days ago | 0|i34k5j: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2606 | SaslServerCallbackHandler#handleAuthorizeCallback() should log the exception |
Bug | Closed | Minor | Fixed | Ted Yu | Ted Yu | Ted Yu | 04/Oct/16 09:00 | 31/Mar/17 05:01 | 17/Oct/16 04:48 | 3.4.10, 3.5.3, 3.6.0 | 0 | 4 | {code}
        LOG.info("Setting authorizedID: " + userNameBuilder);
        ac.setAuthorizedID(userNameBuilder.toString());
    } catch (IOException e) {
        LOG.error("Failed to set name based on Kerberos authentication rules.");
    }
{code} On one cluster, we saw the following: {code}
2016-10-04 02:18:16,484 - ERROR [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:SaslServerCallbackHandler@137] - Failed to set name based on Kerberos authentication rules.
{code} It would be helpful if the log contained information about the IOException. |
security | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 22 weeks, 3 days ago | 0|i34exr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
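The change requested in ZOOKEEPER-2606, passing the caught IOException to the logger so its cause is not silently dropped, can be sketched as follows. This is a minimal standalone illustration, not the actual patch: the `error(...)` helper is a hypothetical stand-in for SLF4J's `LOG.error(String, Throwable)` overload (which would also print the stack trace), and the exception message is invented.

```java
import java.io.IOException;

// Minimal sketch: include the caught exception in the log call so the
// failure's type and message are preserved in the server log.
public class LogWithCause {
    // Hypothetical stand-in for LOG.error(String, Throwable).
    static void error(String msg, Throwable t) {
        System.out.println("ERROR - " + msg + " " + t);
    }

    public static void main(String[] args) {
        try {
            throw new IOException("no auth_to_local rule matched");
        } catch (IOException e) {
            // Before: LOG.error("Failed to set name based on Kerberos authentication rules.");
            // After: pass the exception itself as the second argument.
            error("Failed to set name based on Kerberos authentication rules.", e);
        }
    }
}
```

With the exception included, the ERROR line above would have shown why the Kerberos name rules failed instead of only that they failed.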
| ZooKeeper | ZOOKEEPER-2605 | Snapshot generation fills up disk space due to high volume of requests. |
Bug | Resolved | Minor | Not A Problem | Unassigned | Joe Wang | Joe Wang | 29/Sep/16 14:55 | 29/Sep/16 15:20 | 29/Sep/16 15:20 | 3.4.5 | 0 | 2 | Not sure if it's a bug, or just a consequence of a design decision. Recently we had an issue where faulty clients were issuing create requests at an abnormally high rate, which caused ZooKeeper to generate more snapshots than our cron job could clean up. This filled up the disk on our ZooKeeper hosts and brought the cluster down. Is there a reason why ZooKeeper uses a write-ahead log instead of only flushing successful transactions to disk? If only successful transactions were flushed and counted towards snapCount, then even a client spamming requests to create a node that already exists wouldn't cause a flood of snapshots to be persisted to disk. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 25 weeks ago | 0|i349mf: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
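The snapshot-flood mechanism described in ZOOKEEPER-2605 can be sketched with a toy model. This is an illustration of the reporter's reasoning under the behavior they describe (every proposed transaction, including ones that fail with e.g. NodeExists, is appended to the transaction log, and a snapshot is rolled roughly every snapCount logged transactions); it is not ZooKeeper's actual snapshot code, and the numbers are made up.

```java
// Toy model: snapshots are driven by *logged* transactions, not by
// *successful* ones, so a burst of failing creates still rolls snapshots.
public class SnapCountModel {
    static int snapshotsTaken(int loggedTxns, int snapCount) {
        // One snapshot per snapCount transactions appended to the log.
        return loggedTxns / snapCount;
    }

    public static void main(String[] args) {
        // 100,000 spammed creates against an already-existing node with
        // snapCount=1000: every request is logged, so ~100 snapshots get
        // written even though none of the creates succeeded.
        System.out.println(snapshotsTaken(100_000, 1_000));
    }
}
```

Under this model, only a cleanup job that keeps pace with the request rate (or throttling the faulty clients) prevents the disk from filling.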
| ZooKeeper | ZOOKEEPER-2604 | Temporary node has not been deleted |
Bug | Open | Major | Unresolved | Unassigned | sunqb | sunqb | 27/Sep/16 07:23 | 14/Mar/18 03:22 | 3.4.6 | 3.4.6 | 0 | 3 | Linux CentOS, JDK 7 | I use zkclient to connect to the ZooKeeper server, but sometimes when I close my zkclient the temporary (ephemeral) node is not deleted. I have looked into this bug, and I worked around it by deleting the data dir. | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 2 years, 1 week, 1 day ago | 0|i344s7: |
| ZooKeeper | ZOOKEEPER-2603 | Init script restart command broken on Debian Jessie |
Bug | Open | Minor | Unresolved | Unassigned | Mateusz Moneta | Mateusz Moneta | 26/Sep/16 06:48 | 26/Sep/16 06:48 | 3.4.6, 3.4.9 | 0 | 2 | Hello, when you try to restart ZooKeeper via {{service zookeeper restart}} it ends in this state: {noformat}
root@m1:/etc/systemd# service zookeeper status
● zookeeper.service - LSB: Apache ZooKeeper server
   Loaded: loaded (/etc/init.d/zookeeper)
   Active: active (exited) since Mon 2016-09-26 10:38:47 UTC; 58s ago
  Process: 55495 ExecStop=/etc/init.d/zookeeper stop (code=exited, status=0/SUCCESS)
  Process: 55504 ExecStart=/etc/init.d/zookeeper start (code=exited, status=0/SUCCESS)
{noformat} After that, {{service zookeeper start}} won't work. The only way is to do {{service zookeeper stop}} and then {{start}}, or {{service zookeeper --full-restart}} (which does basically the same). |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 25 weeks, 3 days ago | 0|i342kv: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2600 | dangling ephemerals on overloaded server with local sessions |
Bug | Resolved | Major | Cannot Reproduce | Unassigned | Benjamin Reed | Benjamin Reed | 22/Sep/16 17:38 | 24/Sep/16 11:46 | 24/Sep/16 05:36 | quorum | 0 | 4 | we had the following strange production bug: there was an ephemeral znode for a session that was no longer active. it happened even in the absence of failures. we are running with local sessions enabled and slightly different logic than the open source zookeeper, but code inspection shows that the problem is also in open source. the triggering condition was server overload. we had a traffic burst and it we were having commit latencies of over 30 seconds. after digging through logs/code we realized from the logs that the create session txn for the ephemeral node started (in the PrepRequestProcessor) at 11:23:04 and committed at 11:23:38 (the "Adding global session" is output in the commit processor). it took 34 seconds to commit the createSession, during that time the session expired. due to delays it appears that the interleave was as follows: 1) create session hits prep request processor and create session txn generated 11:23:04 2) time passes as the create session is going through zab 3) the session expires, close session is generated, and close session txn generated 11:23:23 4) the create session gets committed and the session gets re-added to the sessionTracker 11:23:38 5) the create ephemeral node hits prep request processor and a create txn generated 11:23:40 6) the close session gets committed (all ephemeral nodes for the session are deleted) and the session is deleted from sessionTracker 7) the create ephemeral node gets committed the root cause seems to be that the gobal sessions are managed by both the PrepRequestProcessor and the CommitProcessor. also with the local session upgrading we can have changes in flight before our sessions commits. i think there are probably two places to fix: 1) changes to session tracker should not happen in prep request processor. 
2) we should not have requests in flight while create session is in process. there are two options to prevent this: a) when a create session is generated in makeUpgradeRequest, we need to start queuing the requests from the clients and only submit them once the create session is committed b) the client should explicitly detect that it needs to change from local session to global session and explicitly open a global session and get the commit before it sends an ephemeral create request option 2a) is a more transparent fix, but architecturally and in the long term i think 2b) might be better. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 25 weeks, 5 days ago | 0|i33z5r: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
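The interleaving reported in ZOOKEEPER-2600 can be replayed as a toy simulation. This is a sketch of the reported race only, not ZooKeeper's session-tracker code; the session id and znode path are hypothetical.

```java
import java.util.*;

// Toy replay of the reported interleaving: because the ephemeral create's
// commit lands *after* the close-session's commit, the znode outlives its
// session, leaving a dangling ephemeral.
public class DanglingEphemeral {
    public static void main(String[] args) {
        Set<String> sessions = new HashSet<>();
        Map<String, List<String>> ephemerals = new HashMap<>();

        // Step 4: createSession commits; session re-added to the tracker.
        sessions.add("s1");
        // Step 6: closeSession commits; session removed, its (then-empty)
        // set of ephemerals deleted.
        sessions.remove("s1");
        ephemerals.remove("s1");
        // Step 7: the ephemeral create, prepped at step 5, commits anyway.
        ephemerals.computeIfAbsent("s1", k -> new ArrayList<>())
                  .add("/member0000000001");

        // Result: an ephemeral znode owned by a session that no longer exists.
        System.out.println(sessions.contains("s1") + " " + ephemerals.get("s1"));
    }
}
```

Both proposed fixes attack the same window: 2a closes it by holding client requests until the createSession commits, 2b by making the client obtain a committed global session before sending the ephemeral create.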
| ZooKeeper | ZOOKEEPER-2599 | Quorum with 3 nodes: stop 2 nodes and leave one running, then install and configure a new quorum that shares one node but carries the new quorum's configuration; the common node syncs its configuration with the previous quorum |
Bug | Open | Critical | Unresolved | Unassigned | Rakesh Kumar Singh | Rakesh Kumar Singh | 22/Sep/16 10:36 | 22/Sep/16 10:36 | 3.5.1 | 0 | 3 | Start a quorum of 3 ZooKeeper nodes (say A, B, C), stop 2 of them and leave one running, then install and configure a new quorum (A, A2, A3, A4, A5) where A is common but now carries the new quorum's configuration. When A starts, it syncs its configuration with the previous quorum. Steps to reproduce:- 1. Configure and start a quorum of 3 nodes (A, B, C) -> 1st quorum 2. Stop 2 nodes and leave the 3rd node (say C) running 3. Create a new quorum of 5 nodes (A, A2, A3, A4, A5) where A has the same IP and port used in the 1st quorum, but A's configuration is as per the new quorum (details of A, A2, A3, A4, A5 are present, not B & C). 4. Now start the 2nd quorum. Here A's dynamic configuration gets changed according to the 1st quorum. Problems:- 1. Node A now syncs its data with neither the 1st quorum nor the 2nd quorum 2. This is a big security flaw, and the whole quorum can be broken |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 26 weeks ago | 0|i33yev: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2598 | Data Inconsistency after power off/on of some nodes |
Bug | Open | Major | Unresolved | Unassigned | Srinivas Neginhal | Srinivas Neginhal | 21/Sep/16 14:01 | 17/Oct/16 06:46 | 3.5.1 | quorum | 0 | 4 | ZK is running in a docker container on a Ubuntu 14.04 VM | Steps to reproduce: 1. Create a three node cluster: Node1, Node2 and Node3. Each node is a VM that runs: 1. ZK in a docker container 2. Two clients, A and B that use ZK for group membership and leader election. The clients create sequential ephemeral nodes when they come up. 2. The three ZK's running in the containers form an ensemble. 3. Power off/on Node 2 and Node 3 in a loop 4. After a few times, the ephemeral nodes seen by the three nodes are different. Here is the output of some four letter commands with the ensemble in the state: 1. conf: ZK 1: # echo conf| nc 10.0.0.1 1300 clientPort=1300 secureClientPort=-1 dataDir=/moot/persistentStore/zkWorkspace/version-2 dataDirSize=67293721 dataLogDir=/moot/persistentStore/zkWorkspace/version-2 dataLogSize=67293721 tickTime=2000 maxClientCnxns=60 minSessionTimeout=4000 maxSessionTimeout=40000 serverId=1 initLimit=100 syncLimit=20 electionAlg=3 electionPort=1200 quorumPort=1100 peerType=0 membership: server.1=10.0.0.1:1100:1200:participant;10.0.0.1:1300;8e64c644-d0fa-414f-bab2-3c8c80364410 server.2=10.0.0.2:1100:1200:participant;10.0.0.2:1300;38bf19b8-d4cb-4dac-b328-7bbf0ee1e2c4 server.3=10.0.0.3:1100:1200:participant;10.0.0.3:1300;e1415d59-e857-43e6-ba9b-01daeb31a434 ZK 2: # echo conf| nc 10.0.0.2 1300 clientPort=1300 secureClientPort=-1 dataDir=/moot/persistentStore/zkWorkspace/version-2 dataDirSize=1409480873 dataLogDir=/moot/persistentStore/zkWorkspace/version-2 dataLogSize=1409480873 tickTime=2000 maxClientCnxns=60 minSessionTimeout=4000 maxSessionTimeout=40000 serverId=2 initLimit=100 syncLimit=20 electionAlg=3 electionPort=1200 quorumPort=1100 peerType=0 membership: server.1=10.0.0.1:1100:1200:participant;10.0.0.1:1300;8e64c644-d0fa-414f-bab2-3c8c80364410 
server.2=10.0.0.2:1100:1200:participant;10.0.0.2:1300;38bf19b8-d4cb-4dac-b328-7bbf0ee1e2c4 server.3=10.0.0.3:1100:1200:participant;10.0.0.3:1300;e1415d59-e857-43e6-ba9b-01daeb31a434 ZK 3: # echo conf| nc 10.0.0.3 1300 clientPort=1300 secureClientPort=-1 dataDir=/moot/persistentStore/zkWorkspace/version-2 dataDirSize=1409505467 dataLogDir=/moot/persistentStore/zkWorkspace/version-2 dataLogSize=1409505467 tickTime=2000 maxClientCnxns=60 minSessionTimeout=4000 maxSessionTimeout=40000 serverId=3 initLimit=100 syncLimit=20 electionAlg=3 electionPort=1200 quorumPort=1100 peerType=0 membership: server.1=10.0.0.1:1100:1200:participant;10.0.0.1:1300;8e64c644-d0fa-414f-bab2-3c8c80364410 server.2=10.0.0.2:1100:1200:participant;10.0.0.2:1300;38bf19b8-d4cb-4dac-b328-7bbf0ee1e2c4 server.3=10.0.0.3:1100:1200:participant;10.0.0.3:1300;e1415d59-e857-43e6-ba9b-01daeb31a434 2. mntr: ZK 1: # echo mntr| nc 10.0.0.1 1300 zk_version 3.5.1-alpha--1, built on 09/07/2016 00:34 GMT zk_avg_latency 0 zk_max_latency 471 zk_min_latency 0 zk_packets_received 32556 zk_packets_sent 32564 zk_num_alive_connections 7 zk_outstanding_requests 0 zk_server_state leader zk_znode_count 58 zk_watch_count 51 zk_ephemerals_count 5 zk_approximate_data_size 5251 zk_open_file_descriptor_count 52 zk_max_file_descriptor_count 1048576 zk_followers 2 zk_synced_followers 2 zk_pending_syncs 0 ZK 2: # echo mntr| nc 10.0.0.2 1300 zk_version 3.5.1-alpha--1, built on 09/07/2016 00:34 GMT zk_avg_latency 1 zk_max_latency 227 zk_min_latency 0 zk_packets_received 30905 zk_packets_sent 30936 zk_num_alive_connections 6 zk_outstanding_requests 0 zk_server_state follower zk_znode_count 58 zk_watch_count 82 zk_ephemerals_count 5 zk_approximate_data_size 5251 zk_open_file_descriptor_count 49 zk_max_file_descriptor_count 1048576 ZK 3: # echo mntr| nc 10.0.0.3 1300 zk_version 3.5.1-alpha--1, built on 09/07/2016 00:34 GMT zk_avg_latency 4 zk_max_latency 590 zk_min_latency 0 zk_packets_received 6192 zk_packets_sent 6191 
zk_num_alive_connections 2 zk_outstanding_requests 0 zk_server_state follower zk_znode_count 64 zk_watch_count 17 zk_ephemerals_count 11 zk_approximate_data_size 5806 zk_open_file_descriptor_count 45 zk_max_file_descriptor_count 1048576 3. dump showing the inconsistency: ZK 1: # echo dump| nc 10.0.0.1 1300 SessionTracker dump: Session Sets (17)/(12): 0 expire at Tue Sep 20 18:22:35 UTC 2016: 0 expire at Tue Sep 20 18:22:37 UTC 2016: 0 expire at Tue Sep 20 18:22:39 UTC 2016: 0 expire at Tue Sep 20 18:22:41 UTC 2016: 0 expire at Tue Sep 20 18:22:43 UTC 2016: 0 expire at Tue Sep 20 18:22:45 UTC 2016: 0 expire at Tue Sep 20 18:22:49 UTC 2016: 0 expire at Tue Sep 20 18:22:51 UTC 2016: 0 expire at Tue Sep 20 18:22:53 UTC 2016: 0 expire at Tue Sep 20 18:22:55 UTC 2016: 0 expire at Tue Sep 20 18:22:57 UTC 2016: 4 expire at Tue Sep 20 18:22:59 UTC 2016: 0x100061435f7000d 0x10000d9e4460004 0x100061435f70002 0x10000d9e4460003 4 expire at Tue Sep 20 18:23:03 UTC 2016: 0x2000001141a0002 0x2000001141a0000 0x2000001141a0005 0x100061435f70010 1 expire at Tue Sep 20 18:23:07 UTC 2016: 0x2000001141a0001 1 expire at Tue Sep 20 18:23:09 UTC 2016: 0x100061435f70000 1 expire at Tue Sep 20 18:23:11 UTC 2016: 0x2000001141a000f 1 expire at Tue Sep 20 18:23:13 UTC 2016: 0x300000188c30001 ephemeral nodes dump: Sessions with Ephemerals (5): 0x100061435f70000: /moot/gmle/ServiceDirectory/ActiveNodes/member0000000064 0x2000001141a000f: /moot/gmle/ServiceDirectory/ActiveNodes/member0000000066 0x2000001141a0001: /moot/gmle/ServiceDirectory/ActiveNodes/member0000000065 0x2000001141a0000: /moot/gmle/ActiveControllerCluster/member0000000065 0x2000001141a0005: /moot/gmle/ActiveControllerCluster/member0000000066 Connections dump: Connections Sets (5)/(10): 0 expire at Tue Sep 20 18:22:35 UTC 2016: 1 expire at Tue Sep 20 18:22:45 UTC 2016: ip: /10.0.0.1:45591 sessionId: 0x0 0 expire at Tue Sep 20 18:22:55 UTC 2016: 5 expire at Tue Sep 20 18:23:05 UTC 2016: ip: /10.0.0.3:34734 sessionId: 
0x100061435f7000d ip: /10.0.0.1:42963 sessionId: 0x10000d9e4460003 ip: /10.0.0.3:34739 sessionId: 0x100061435f70010 ip: /10.0.0.2:45750 sessionId: 0x100061435f70002 ip: /10.0.0.1:42961 sessionId: 0x10000d9e4460004 1 expire at Tue Sep 20 18:23:15 UTC 2016: ip: /10.0.0.1:42964 sessionId: 0x100061435f70000 ZK 2: # echo dump| nc 10.0.0.2 1300 SessionTracker dump: Global Sessions(13): 0x10000d9e4460003 30000ms 0x10000d9e4460004 30000ms 0x100061435f70000 40000ms 0x100061435f70002 30000ms 0x100061435f7000d 30000ms 0x100061435f70010 30000ms 0x100061435f70584 4000ms 0x2000001141a0000 40000ms 0x2000001141a0001 40000ms 0x2000001141a0002 30000ms 0x2000001141a0005 40000ms 0x2000001141a000f 40000ms 0x300000188c30001 40000ms ephemeral nodes dump: Sessions with Ephemerals (5): 0x100061435f70000: /moot/gmle/ServiceDirectory/ActiveNodes/member0000000064 0x2000001141a000f: /moot/gmle/ServiceDirectory/ActiveNodes/member0000000066 0x2000001141a0001: /moot/gmle/ServiceDirectory/ActiveNodes/member0000000065 0x2000001141a0000: /moot/gmle/ActiveControllerCluster/member0000000065 0x2000001141a0005: /moot/gmle/ActiveControllerCluster/member0000000066 Connections dump: Connections Sets (4)/(6): 0 expire at Tue Sep 20 18:25:13 UTC 2016: 1 expire at Tue Sep 20 18:25:23 UTC 2016: ip: /10.0.0.2:38021 sessionId: 0x0 1 expire at Tue Sep 20 18:25:33 UTC 2016: ip: /10.0.0.2:35422 sessionId: 0x2000001141a0002 4 expire at Tue Sep 20 18:25:43 UTC 2016: ip: /10.0.0.2:35419 sessionId: 0x2000001141a0001 ip: /10.0.0.1:59025 sessionId: 0x2000001141a0000 ip: /10.0.0.2:35427 sessionId: 0x2000001141a0005 ip: /10.0.0.3:56967 sessionId: 0x2000001141a000f ZK 3: # echo dump| nc 10.0.0.3 1300 SessionTracker dump: Global Sessions(23): 0x10000d9e4460003 30000ms 0x10000d9e4460004 30000ms 0x100055a50b00001 30000ms 0x100055a50b00003 40000ms 0x100055a50b0000c 40000ms 0x100061435f70000 40000ms 0x100061435f70002 30000ms 0x100061435f7000d 30000ms 0x100061435f70010 30000ms 0x100061435f70585 4000ms 0x2000001141a0000 40000ms 
0x2000001141a0001 40000ms 0x2000001141a0002 30000ms 0x2000001141a0005 40000ms 0x2000001141a000f 40000ms 0x200000130750000 40000ms 0x200000130750001 40000ms 0x200000130750002 30000ms 0x200000130750004 40000ms 0x20000013075000d 30000ms 0x3000000e4860000 30000ms 0x3000000e4860002 40000ms 0x300000188c30001 40000ms ephemeral nodes dump: Sessions with Ephemerals (11): 0x100061435f70000: /moot/gmle/ServiceDirectory/ActiveNodes/member0000000064 0x3000000e4860002: /moot/gmle/ActiveControllerCluster/member0000000027 0x100055a50b0000c: /moot/gmle/ServiceDirectory/ActiveNodes/member0000000027 0x100055a50b00003: /moot/gmle/ActiveControllerCluster/member0000000025 0x200000130750004: /moot/gmle/ActiveControllerCluster/member0000000026 0x200000130750000: /moot/gmle/ServiceDirectory/ActiveNodes/member0000000026 0x2000001141a000f: /moot/gmle/ServiceDirectory/ActiveNodes/member0000000066 0x200000130750001: /moot/gmle/ServiceDirectory/ActiveNodes/member0000000025 0x2000001141a0001: /moot/gmle/ServiceDirectory/ActiveNodes/member0000000065 0x2000001141a0000: /moot/gmle/ActiveControllerCluster/member0000000065 0x2000001141a0005: /moot/gmle/ActiveControllerCluster/member0000000066 Connections dump: Connections Sets (4)/(2): 0 expire at Tue Sep 20 18:25:40 UTC 2016: 1 expire at Tue Sep 20 18:25:50 UTC 2016: ip: /10.0.0.3:52784 sessionId: 0x0 0 expire at Tue Sep 20 18:26:10 UTC 2016: 1 expire at Tue Sep 20 18:26:20 UTC 2016: ip: /10.0.0.3:50222 sessionId: 0x300000188c30001 |
9223372036854775807 | No Perforce job exists for this issue. | 3 | 9223372036854775807 | 3 years, 22 weeks, 3 days ago | 0|i33wsv: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2597 | Add script to merge PR from Apache git repo to Github |
Improvement | Resolved | Minor | Fixed | Edward Ribeiro | Edward Ribeiro | Edward Ribeiro | 21/Sep/16 09:01 | 15/Feb/17 21:24 | 25/Oct/16 22:15 | 0 | 6 | A port of kafka-merge-pr.py to work on the ZooKeeper repo. | 9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 3 years, 21 weeks, 1 day ago | 0|i33w6f: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2596 | Zookeeper.c - gethostname drops subdomain returning only partial FQDN |
Bug | Resolved | Minor | Invalid | Unassigned | Scott Thompson | Scott Thompson | 20/Sep/16 10:10 | 21/Sep/16 04:28 | 20/Sep/16 17:11 | c client | 0 | 2 | RedHat Enterprise Server 7.2 | Nodes fail to connect when a sub-domain is present in the FQDN. The sub-domain is dropped from the hostname string when calling gethostname in zookeeper.c. machine.sub.domain.com becomes machine.domain.com #ifdef HAVE_GETHOSTNAME gethostname(buf, sizeof(buf)); LOG_INFO(LOGCALLBACK(zh), "Client environment:host.name=%s", buf); |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 26 weeks, 1 day ago | 0|i33ud3: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2595 | znode created with acl enabled on it can be deleted by any unauthorised user, when it has no child znodes |
Bug | Open | Major | Unresolved | Unassigned | Neha Bathra | Neha Bathra | 20/Sep/16 09:13 | 20/Sep/16 09:15 | 0 | 2 | user1 sets an ACL on a znode for user2, for example: create /xyz data sasl:user2/xyz@XYZ.COM:cdr. Now user3 can log in to zkCli and delete /xyz if it has no child znodes, even though user3 has not been granted access |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 26 weeks, 2 days ago | 0|i33uaf: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2594 | Use TLS for downloading artifacts during build |
Improvement | Closed | Blocker | Fixed | Olaf Flebbe | Olaf Flebbe | Olaf Flebbe | 19/Sep/16 15:06 | 31/Mar/17 05:01 | 05/Oct/16 12:33 | 3.4.9, 3.5.2 | 3.4.10, 3.5.3, 3.6.0 | build | 0 | 2 | ZooKeeper builds download dependencies using the insecure http:// protocol. The outdated java.net repository can be removed now, since its content is on maven.org. The https://repo2.maven.org repository cannot be used, since its certificate is invalid; use repo1.maven.org instead (IMHO this is intentional). Attached you'll find a proposed patch (against git head) to fix these issues, as a starter. |
security | 9223372036854775807 | No Perforce job exists for this issue. | 5 | 9223372036854775807 | 3 years, 24 weeks, 1 day ago |
Reviewed
|
0|i33t8f: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2593 | Enforce the quota limit |
New Feature | Open | Major | Unresolved | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 19/Sep/16 11:40 | 08/May/19 02:13 | java client, server | 0 | 4 | ZOOKEEPER-3301, ZOOKEEPER-451 | Currently in ZooKeeper, when a quota limit is exceeded, a warning is logged. There are many user scenarios where it is desirable to throw an exception when quota limits are exceeded. We should make it configurable whether to throw an exception or just log the warning when quota limits are exceeded. *Implementation:* add new properties {code} enforce.number.quota enforce.byte.quota {code} add new error codes {code} KeeperException.Code.NUMBERQUOTAEXCEED KeeperException.Code.BYTEQUOTAEXCEED {code} add new exceptions {code} KeeperException.NumberQuotaExceedException KeeperException.ByteQuotaExceedException {code} *Basic Scenarios:* # If enforce.number.quota=true and the number quota is exceeded, the server should send the NUMBERQUOTAEXCEED error code and the client should throw NumberQuotaExceedException # If enforce.byte.quota=true and the byte quota is exceeded, the server should send the BYTEQUOTAEXCEED error code and the client should throw ByteQuotaExceedException *Impacted APIs:* create setData |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 26 weeks ago | 0|i33swn: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
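The enforcement proposed in ZOOKEEPER-2593 above reduces to a simple check at create/setData time. Below is a minimal sketch of that decision logic; the class and method names are hypothetical (they are not ZooKeeper APIs), and -1 follows the listquota convention of meaning "no limit":

```java
// Hypothetical sketch of the quota-enforcement decision proposed in
// ZOOKEEPER-2593. Names are illustrative, not actual ZooKeeper code.
public class QuotaEnforcement {

    // A limit of -1 means "no limit", matching how listquota reports bytes=-1.
    public static boolean exceedsQuota(long current, long limit) {
        return limit >= 0 && current > limit;
    }

    // With enforcement enabled, an exceeded quota becomes an error instead of
    // a warning; with it disabled, the current warn-only behavior is kept.
    public static boolean shouldReject(boolean enforce, long current, long limit) {
        return enforce && exceedsQuota(current, limit);
    }
}
```

With enforce.number.quota=false the check reduces to today's warn-only behavior; with it enabled, a rejection would map to the proposed NUMBERQUOTAEXCEED/BYTEQUOTAEXCEED error codes.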
| ZooKeeper | ZOOKEEPER-2592 | Zookeeper is not recoverable once running system (machine on which zookeeper is running) is out of space |
Bug | Open | Critical | Unresolved | Unassigned | Rakesh Kumar Singh | Rakesh Kumar Singh | 19/Sep/16 11:01 | 21/Nov/18 20:04 | 3.5.1, 3.5.2 | server | 1 | 7 | Zookeeper is not recoverable once the system it runs on is out of disk space. Steps to reproduce:- 1. Install zookeeper in standalone mode and start zookeeper 2. Fill the machine's disk completely 3. Connect through a client to zookeeper and try to create some znodes with some data. 4. After some time, creating further znodes fails because the disk is full 5. Now start freeing space on that machine 6. Connect through a client again. The connection succeeds, but executing any command like "ls /" fails even though free space is now more than 11 GB Client log:- BLR1000007042:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin # df -h Filesystem Size Used Avail Use% Mounted on /dev/xvda2 36G 24G 11G 70% / udev 1.9G 116K 1.9G 1% /dev tmpfs 1.9G 0 1.9G 0% /dev/shm BLR1000007042:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin # ./zkCli.sh Connecting to localhost:2181 2016-09-19 22:50:20,227 [myid:] - INFO [main:Environment@109] - Client environment:zookeeper.version=3.5.1-alpha--1, built on 08/18/2016 08:20 GMT 2016-09-19 22:50:20,231 [myid:] - INFO [main:Environment@109] - Client environment:host.name=BLR1000007042 2016-09-19 22:50:20,231 [myid:] - INFO [main:Environment@109] - Client environment:java.version=1.7.0_79 2016-09-19 22:50:20,234 [myid:] - INFO [main:Environment@109] - Client environment:java.vendor=Oracle Corporation 2016-09-19 22:50:20,234 [myid:] - INFO [main:Environment@109] - Client environment:java.home=/usr/java/jdk1.7.0_79/jre 2016-09-19 22:50:20,234 [myid:] - INFO [main:Environment@109] - Client 
environment:java.class.path=/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../build/classes:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../build/lib/*.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/slf4j-log4j12-1.7.5.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/slf4j-api-1.7.5.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/servlet-api-2.5-20081211.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/netty-3.7.0.Final.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/log4j-1.2.16.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/jline-2.11.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/jetty-util-6.1.26.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/jetty-6.1.26.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/javacc.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/jackson-mapper-asl-1.9.11.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/jackson-core-asl-1.9.11.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/commons-cli-1.2.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/ant-eclipse-1.0-jvm1.2.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../zookeeper-3.5.1-alpha.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../src/java/lib/ant-eclipse-1.0-jvm1.2.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../conf:/usr/java/jdk1.7.0_79/lib 2016-09-19 22:50:20,234 [myid:] - INFO [main:Environment@109] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib 2016-09-19 22:50:20,234 [myid:] - INFO [main:Environment@109] - Client environment:java.io.tmpdir=/tmp 2016-09-19 22:50:20,234 [myid:] - INFO 
[main:Environment@109] - Client environment:java.compiler=<NA> 2016-09-19 22:50:20,235 [myid:] - INFO [main:Environment@109] - Client environment:os.name=Linux 2016-09-19 22:50:20,235 [myid:] - INFO [main:Environment@109] - Client environment:os.arch=amd64 2016-09-19 22:50:20,235 [myid:] - INFO [main:Environment@109] - Client environment:os.version=3.0.76-0.11-default 2016-09-19 22:50:20,235 [myid:] - INFO [main:Environment@109] - Client environment:user.name=root 2016-09-19 22:50:20,235 [myid:] - INFO [main:Environment@109] - Client environment:user.home=/root 2016-09-19 22:50:20,235 [myid:] - INFO [main:Environment@109] - Client environment:user.dir=/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin 2016-09-19 22:50:20,235 [myid:] - INFO [main:Environment@109] - Client environment:os.memory.free=52MB 2016-09-19 22:50:20,237 [myid:] - INFO [main:Environment@109] - Client environment:os.memory.max=227MB 2016-09-19 22:50:20,238 [myid:] - INFO [main:Environment@109] - Client environment:os.memory.total=57MB 2016-09-19 22:50:20,241 [myid:] - INFO [main:ZooKeeper@716] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@3865db85 Welcome to ZooKeeper! 2016-09-19 22:50:20,264 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1138] - Opening socket connection to server localhost/127.0.0.1:2181. 
Will not attempt to authenticate using SASL (unknown error) 2016-09-19 22:50:20,270 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@980] - Socket connection established, initiating session, client: /127.0.0.1:47801, server: localhost/127.0.0.1:2181 JLine support is enabled [INFO] Unable to bind key for unsupported operation: backward-delete-word [INFO] Unable to bind key for unsupported operation: backward-delete-word [INFO] Unable to bind key for unsupported operation: down-history [INFO] Unable to bind key for unsupported operation: up-history [INFO] Unable to bind key for unsupported operation: up-history [INFO] Unable to bind key for unsupported operation: down-history [INFO] Unable to bind key for unsupported operation: up-history [INFO] Unable to bind key for unsupported operation: down-history [INFO] Unable to bind key for unsupported operation: up-history [INFO] Unable to bind key for unsupported operation: down-history [INFO] Unable to bind key for unsupported operation: up-history [INFO] Unable to bind key for unsupported operation: down-history [zk: localhost:2181(CONNECTING) 0] ls / 2016-09-19 22:50:35,280 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1251] - Client session timed out, have not heard from server in 15011ms for sessionid 0x0, closing socket connection and attempting reconnect Exception in thread "main" org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for / at org.apache.zookeeper.KeeperException.create(KeeperException.java:99) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:2255) at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:2283) at org.apache.zookeeper.cli.LsCommand.exec(LsCommand.java:93) at org.apache.zookeeper.ZooKeeperMain.processZKCmd(ZooKeeperMain.java:674) at org.apache.zookeeper.ZooKeeperMain.processCmd(ZooKeeperMain.java:577) at 
org.apache.zookeeper.ZooKeeperMain.executeLine(ZooKeeperMain.java:360) at org.apache.zookeeper.ZooKeeperMain.run(ZooKeeperMain.java:320) at org.apache.zookeeper.ZooKeeperMain.main(ZooKeeperMain.java:280) BLR1000007042:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin # -{color:blue} Server log 2016-09-19 22:34:13,380 [myid:] - INFO [main:QuorumPeerConfig@114] - Reading configuration from: /home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../conf/zoo.cfg 2016-09-19 22:34:13,386 [myid:] - INFO [main:QuorumPeerConfig@316] - clientPortAddress is 0.0.0.0/0.0.0.0:2181 2016-09-19 22:34:13,386 [myid:] - INFO [main:QuorumPeerConfig@320] - secureClientPort is not set 2016-09-19 22:34:13,389 [myid:] - INFO [main:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3 2016-09-19 22:34:13,389 [myid:] - INFO [main:DatadirCleanupManager@79] - autopurge.purgeInterval set to 0 2016-09-19 22:34:13,390 [myid:] - INFO [main:DatadirCleanupManager@101] - Purge task is not scheduled. 
2016-09-19 22:34:13,390 [myid:] - WARN [main:QuorumPeerMain@122] - Either no config or no quorum defined in config, running in standalone mode 2016-09-19 22:34:13,402 [myid:] - INFO [main:QuorumPeerConfig@114] - Reading configuration from: /home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../conf/zoo.cfg 2016-09-19 22:34:13,402 [myid:] - INFO [main:QuorumPeerConfig@316] - clientPortAddress is 0.0.0.0/0.0.0.0:2181 2016-09-19 22:34:13,402 [myid:] - INFO [main:QuorumPeerConfig@320] - secureClientPort is not set 2016-09-19 22:34:13,403 [myid:] - INFO [main:ZooKeeperServerMain@113] - Starting server 2016-09-19 22:34:13,416 [myid:] - INFO [main:Environment@109] - Server environment:zookeeper.version=3.5.1-alpha--1, built on 08/18/2016 08:20 GMT 2016-09-19 22:34:13,416 [myid:] - INFO [main:Environment@109] - Server environment:host.name=BLR1000007042 2016-09-19 22:34:13,416 [myid:] - INFO [main:Environment@109] - Server environment:java.version=1.7.0_79 2016-09-19 22:34:13,417 [myid:] - INFO [main:Environment@109] - Server environment:java.vendor=Oracle Corporation 2016-09-19 22:34:13,417 [myid:] - INFO [main:Environment@109] - Server environment:java.home=/usr/java/jdk1.7.0_79/jre 2016-09-19 22:34:13,419 [myid:] - INFO [main:Environment@109] - Server 
environment:java.class.path=/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../build/classes:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../build/lib/*.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/slf4j-log4j12-1.7.5.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/slf4j-api-1.7.5.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/servlet-api-2.5-20081211.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/netty-3.7.0.Final.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/log4j-1.2.16.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/jline-2.11.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/jetty-util-6.1.26.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/jetty-6.1.26.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/javacc.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/jackson-mapper-asl-1.9.11.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/jackson-core-asl-1.9.11.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/commons-cli-1.2.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/ant-eclipse-1.0-jvm1.2.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../zookeeper-3.5.1-alpha.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../src/java/lib/ant-eclipse-1.0-jvm1.2.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../conf:/usr/java/jdk1.7.0_79/lib 2016-09-19 22:34:13,420 [myid:] - INFO [main:Environment@109] - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib 2016-09-19 22:34:13,420 [myid:] - INFO [main:Environment@109] - Server environment:java.io.tmpdir=/tmp 2016-09-19 22:34:13,420 [myid:] - INFO 
[main:Environment@109] - Server environment:java.compiler=<NA> 2016-09-19 22:34:13,420 [myid:] - INFO [main:Environment@109] - Server environment:os.name=Linux 2016-09-19 22:34:13,420 [myid:] - INFO [main:Environment@109] - Server environment:os.arch=amd64 2016-09-19 22:34:13,421 [myid:] - INFO [main:Environment@109] - Server environment:os.version=3.0.76-0.11-default 2016-09-19 22:34:13,421 [myid:] - INFO [main:Environment@109] - Server environment:user.name=root 2016-09-19 22:34:13,421 [myid:] - INFO [main:Environment@109] - Server environment:user.home=/root 2016-09-19 22:34:13,421 [myid:] - INFO [main:Environment@109] - Server environment:user.dir=/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin 2016-09-19 22:34:13,421 [myid:] - INFO [main:Environment@109] - Server environment:os.memory.free=51MB 2016-09-19 22:34:13,422 [myid:] - INFO [main:Environment@109] - Server environment:os.memory.max=889MB 2016-09-19 22:34:13,422 [myid:] - INFO [main:Environment@109] - Server environment:os.memory.total=57MB 2016-09-19 22:34:13,424 [myid:] - INFO [main:ZooKeeperServer@858] - minSessionTimeout set to 4000 2016-09-19 22:34:13,424 [myid:] - INFO [main:ZooKeeperServer@867] - maxSessionTimeout set to 40000 2016-09-19 22:34:13,424 [myid:] - INFO [main:ZooKeeperServer@156] - Created server with tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir /home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/zoo_log/version-2 snapdir /home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/data/version-2 2016-09-19 22:34:13,453 [myid:] - INFO [main:Slf4jLog@67] - Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2016-09-19 22:34:13,477 [myid:] - INFO [main:Slf4jLog@67] - jetty-6.1.26 2016-09-19 22:34:13,510 [myid:] - INFO [main:Slf4jLog@67] - Started SelectChannelConnector@0.0.0.0:8080 2016-09-19 22:34:13,514 [myid:] - INFO [main:JettyAdminServer@105] - Started AdminServer on address 0.0.0.0, port 8080 and 
command URL /commands 2016-09-19 22:34:13,521 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 1 selector thread(s), 8 worker threads, and 64 kB direct buffers. 2016-09-19 22:34:13,523 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port 0.0.0.0/0.0.0.0:2181 2016-09-19 22:34:13,537 [myid:] - INFO [main:FileTxnSnapLog@298] - Snapshotting: 0x0 to /home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/data/version-2/snapshot.0 2016-09-19 22:34:13,567 [myid:] - INFO [main:ContainerManager@64] - Using checkIntervalMs=60000 maxPerMinute=10000 2016-09-19 22:35:41,907 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /0:0:0:0:0:0:0:1:49485 2016-09-19 22:35:41,917 [myid:] - INFO [NIOWorkerThread-1:ZooKeeperServer@964] - Client attempting to establish new session at /0:0:0:0:0:0:0:1:49485 2016-09-19 22:35:41,919 [myid:] - INFO [SyncThread:0:FileTxnLog@200] - Creating new log file: log.1 2016-09-19 22:35:41,952 [myid:] - INFO [SyncThread:0:ZooKeeperServer@678] - Established session 0x100632436270000 with negotiated timeout 30000 for client /0:0:0:0:0:0:0:1:49485 2016-09-19 22:40:21,211 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /10.18.221.194:34892 2016-09-19 22:40:21,218 [myid:] - INFO [NIOWorkerThread-8:ZooKeeperServer@964] - Client attempting to establish new session at /10.18.221.194:34892 2016-09-19 22:40:21,221 [myid:] - INFO [SyncThread:0:ZooKeeperServer@678] - Established session 0x100632436270001 with negotiated timeout 30000 for client /10.18.221.194:34892 2016-09-19 22:40:40,298 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Processed session termination for sessionid: 0x100632436270001 2016-09-19 22:40:40,301 [myid:] - INFO 
[NIOWorkerThread-3:MBeanRegistry@119] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port2181,name1=Connections,name2=10.18.221.194,name3=0x100632436270001] 2016-09-19 22016-09-19 22:43:47,733 [myid:] - INFO [SyncThread:0:ZooKeeperServer@498] - shutting down 2016-09-19 22:44:39,892 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:47796 2016-09-19 22:44:39,898 [myid:] - INFO [NIOWorkerThread-2:ZooKeeperServer@964] - Client attempting to establish new session at /127.0.0.1:47796 2016-09-19 22:45:15,883 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /0:0:0:0:0:0:0:1:49493 2016-09-19 22:45:15,890 [myid:] - INFO [NIOWorkerThread-3:ZooKeeperServer@964] - Client attempting to establish new session at /0:0:0:0:0:0:0:1:49493 2016-09-19 22:45:16,000 [myid:] - INFO [ConnnectionExpirer:NIOServerCnxn@606] - Closed socket connection for client /127.0.0.1:47796 which had sessionid 0x100632436270012 2016-09-19 22:45:46,000 [myid:] - INFO [ConnnectionExpirer:NIOServerCnxn@606] - Closed socket connection for client /0:0:0:0:0:0:0:1:49493 which had sessionid 0x100632436270013 2016-09-19 22:47:42,512 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /0:0:0:0:0:0:0:1:49494 2016-09-19 22:47:42,519 [myid:] - INFO [NIOWorkerThread-4:ZooKeeperServer@964] - Client attempting to establish new session at /0:0:0:0:0:0:0:1:49494 2016-09-19 22:48:16,001 [myid:] - INFO [ConnnectionExpirer:NIOServerCnxn@606] - Closed socket connection for client /0:0:0:0:0:0:0:1:49494 which had sessionid 0x100632436270014 2016-09-19 22:50:20,268 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:47801 
2016-09-19 22:50:20,275 [myid:] - INFO [NIOWorkerThread-5:ZooKeeperServer@964] - Client attempting to establish new session at /127.0.0.1:47801 2016-09-19 22:50:56,000 [myid:] - INFO [ConnnectionExpirer:NIOServerCnxn@606] - Closed socket connection for client /127.0.0.1:47801 which had sessionid 0x100632436270015 {color} |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 1 year, 17 weeks ago | 0|i33str: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2591 | The deletion of Container znode doesn't check ACL delete permission |
Bug | Resolved | Major | Not A Bug | Edward Ribeiro | Edward Ribeiro | Edward Ribeiro | 17/Sep/16 22:16 | 19/Mar/19 08:45 | 19/Sep/16 10:41 | security, server | 0 | 6 | Container nodes check the ACL before creation, but the deletion doesn't check the ACL rights. The code below succeeds even though we removed ACL access permissions for "/a". {code} zk.create("/a", null, Ids.OPEN_ACL_UNSAFE, CreateMode.CONTAINER); ArrayList<ACL> list = new ArrayList<>(); list.add(new ACL(0, Ids.ANYONE_ID_UNSAFE)); zk.setACL("/", list, -1); zk.delete("/a", -1); {code} |
container_znode_type | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 2 years, 36 weeks, 5 days ago | 0|i33ref: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
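The fix ZOOKEEPER-2591 implies is that container-znode deletion should perform the same parent-ACL check as an ordinary delete. A minimal sketch of that permission test follows; the bit value matches org.apache.zookeeper.ZooDefs.Perms, but the helper class itself is hypothetical:

```java
// Hypothetical helper illustrating the check the report says is missing:
// deleting a child znode (container or not) should require DELETE
// permission on the parent. Bit value matches ZooDefs.Perms.DELETE.
public class AclDeleteCheck {

    public static final int DELETE = 1 << 3; // ZooDefs.Perms.DELETE = 8

    // True only if the granted permission mask includes the DELETE bit,
    // e.g. the "d" in a "cdrwa" ACL string.
    public static boolean canDeleteChild(int grantedPerms) {
        return (grantedPerms & DELETE) != 0;
    }
}
```

In the report's snippet, setACL("/", ...) with perms 0 would make canDeleteChild return false, so zk.delete("/a", -1) would be rejected rather than silently succeeding.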
| ZooKeeper | ZOOKEEPER-2590 | setACL doesn't affect exists() operation |
Bug | Open | Major | Unresolved | Unassigned | Edward Ribeiro | Edward Ribeiro | 17/Sep/16 21:51 | 06/Nov/19 13:54 | 0 | 2 | As hinted [here|https://github.com/apache/zookeeper/blob/master/src/java/main/org/apache/zookeeper/server/FinalRequestProcessor.java#L298], even if a parent znode path has restricted READ access it's possible to issue an exists() operation on any child znode of that given path. For example, the snippet below doesn't throw {{NoAuthException}}, even though it removes ACL rights to "/": {code} zk.create("/a", null, Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT); ArrayList<ACL> acls = new ArrayList<>(); acls.add(new ACL(0, Ids.ANYONE_ID_UNSAFE)); zk.setACL("/", acls, -1); Stat r = zk.exists("/a", false); {code} Also, in the above example, what if we removed READ access for "/a"? Should we allow a call to exists("/a") to succeed even though it returns the znode metadata info? |
acl, security | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 1 year, 34 weeks ago | 0|i33rdz: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2589 | Not able to access znode if IP ACL is set on a znode when zookeeper started in ssl mode |
Bug | Open | Major | Unresolved | Unassigned | Rakesh Kumar Singh | Rakesh Kumar Singh | 17/Sep/16 02:18 | 21/Sep/16 01:12 | 3.5.1 | 0 | 3 | Not able to access a znode if an IP ACL is set on it when zookeeper is started in SSL mode. Steps to reproduce:- 1. Start zookeeper in SSL (standalone) mode 2. Create a znode 3. Set an IP ACL, then connect via zkCli and try to access the znode; access is denied. [zk: localhost:2181(CONNECTED) 3] setAcl /test ip:127.0.0.1:crdwa [zk: localhost:2181(CONNECTED) 5] quit >> Start zkCli with 127.0.0.1 and try to access the znode [zk: 127.0.0.1:2181(CONNECTED) 0] get -s /test Authentication is not valid : /test [zk: 127.0.0.1:2181(CONNECTED) 1] getAcl /test 'ip,'127.0.0.1 : cdrwa [zk: 127.0.0.1:2181(CONNECTED) 2] get /test Authentication is not valid : /test |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 26 weeks, 1 day ago | 0|i33qsf: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2587 | Not handled negative scenario for redo command |
Bug | Resolved | Minor | Duplicate | Rakesh Kumar Singh | Rakesh Kumar Singh | Rakesh Kumar Singh | 16/Sep/16 01:44 | 06/Oct/16 07:31 | 06/Oct/16 07:31 | java client | 0 | 2 | ZOOKEEPER-2467 | [zk: localhost:2181(CONNECTED) 2] redo -1 Exception in thread "main" java.lang.NullPointerException at java.util.StringTokenizer.<init>(StringTokenizer.java:199) at java.util.StringTokenizer.<init>(StringTokenizer.java:221) at org.apache.zookeeper.ZooKeeperMain$MyCommandOptions.parseCommand(ZooKeeperMain.java:219) at org.apache.zookeeper.ZooKeeperMain.processZKCmd(ZooKeeperMain.java:638) at org.apache.zookeeper.ZooKeeperMain.processCmd(ZooKeeperMain.java:577) at org.apache.zookeeper.ZooKeeperMain.executeLine(ZooKeeperMain.java:360) at org.apache.zookeeper.ZooKeeperMain.run(ZooKeeperMain.java:320) at org.apache.zookeeper.ZooKeeperMain.main(ZooKeeperMain.java:280) |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 3 years, 24 weeks ago | 0|i33p4n: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
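The NullPointerException in ZOOKEEPER-2587 above comes from handing a null command line to StringTokenizer when the redo index is invalid (e.g. "redo -1" with no matching history entry). A guard along these lines avoids the crash; this is a hypothetical sketch, not the actual ZooKeeperMain patch:

```java
import java.util.Map;
import java.util.StringTokenizer;

// Hypothetical guard for ZooKeeperMain's redo command: validate the
// history index before tokenizing, instead of letting
// new StringTokenizer(null) throw a NullPointerException.
public class RedoGuard {

    // Returns the command line to re-run, or null if the index is not in
    // history; the caller should print an error instead of crashing.
    public static String lookup(Map<Integer, String> history, int index) {
        return history.containsKey(index) ? history.get(index) : null;
    }

    // Tokenizing only happens on a non-null command line.
    public static int tokenCount(String cmdLine) {
        return cmdLine == null ? 0 : new StringTokenizer(cmdLine).countTokens();
    }
}
```

With this shape, "redo -1" falls through the null branch and reports an invalid index, matching the fix direction of the linked ZOOKEEPER-2467.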
| ZooKeeper | ZOOKEEPER-2586 | zoo_aremove_watchers() does not remove a watch if path has more than one watch |
Bug | Open | Major | Unresolved | Unassigned | prashant | prashant | 16/Sep/16 00:35 | 16/Dec/18 09:46 | 1 | 3 | ZOOKEEPER-1910 | Three issues: 1) zoo_aremove_watchers() does not remove a watch if the path has more than one watch. It does work in the following cases: it removes the watch if the path has only one watch, and it removes all watches if the watcher function argument is NULL. Seen in zookeeper.version=3.5.1-alpha--1, built on 06/09/2016 18:31 GMT. Not sure if this is fixed in later versions. 2) If zoo_aremove_watchers() is called with local=1, the client hangs waiting for a mutex in mt_adaptor.c:102 void notify_sync_completion(struct sync_completion *sc) { pthread_mutex_lock(&sc->lock); ... 3) It acts like a sync API if there is no node and no watcher on the path: it does not call the async completion callback in this case. |
remove_watches | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 2 years, 51 weeks, 4 days ago | 0|i33p33: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2585 | ACL with SSL is not working |
Bug | Open | Critical | Unresolved | Unassigned | Rakesh Kumar Singh | Rakesh Kumar Singh | 15/Sep/16 03:16 | 21/Sep/16 04:48 | 3.5.1 | server | 0 | 3 | Setting an ACL with SSL is not working. Steps to reproduce:- 1. Start zookeeper in SSL mode, standalone 2. Connect to zookeeper from the zookeeper client (using zkCli.sh) 3. Add auth and set the ACL as below, then quit the client :- [zk: localhost:2181(CONNECTED) 0] addauth digest u1:p1 [zk: localhost:2181(CONNECTED) 1] create /test_auth hello Created /test_auth [zk: localhost:2181(CONNECTED) 2] setAcl /test_auth auth:u1:p1:crdwa [zk: localhost:2181(CONNECTED) 3] get /test_auth hello [zk: localhost:2181(CONNECTED) 4] quit 4. Connect to zookeeper again from the zookeeper client (using zkCli.sh) 5. Try to access the znode, set its data, and so on; everything is allowed [zk: localhost:2181(CONNECTED) 2] set /test_auth hello1 [zk: localhost:2181(CONNECTED) 3] get /test_auth hello1 [zk: localhost:2181(CONNECTED) 1] getAcl /test_auth 'x509,'CN=locahost%2COU=CS%2CO=HUAWEI%2CL=Shenzhen%2CST=Guangdong%2CC=CHINA : cdrwa 'digest,'u1:fpT/y03U+EjItKZOSLGvjnJlyng= : cdrwa |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 26 weeks, 1 day ago | 0|i33nl3: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2584 | when setquota for a znode and set ip/user ACL on /zookeeper/quota, still able to delete the quota from client with another ip though it says "Authentication is not valid" |
Bug | Open | Major | Unresolved | Rakesh Kumar Singh | Rakesh Kumar Singh | Rakesh Kumar Singh | 15/Sep/16 00:48 | 28/May/19 07:03 | 3.5.1 | server | 0 | 2 | When setquota is done for a znode and an IP/user ACL is set on /zookeeper/quota, a client with another IP is still able to delete the quota even though it says "Authentication is not valid" >> Set quota and IP ACL from one client (with IP 10.18.101.80) [zk: 10.18.101.80:2181(CONNECTED) 9] setquota -n 10 /test [zk: 10.18.101.80:2181(CONNECTED) 10] setAcl /zookeeper/quota ip:10.18.101.80:crdwa [zk: 10.18.101.80:2181(CONNECTED) 11] >> Try to delete the set quota using a different client (with IP 10.18.219.50) [zk: 10.18.219.50:2181(CONNECTED) 22] listquota /test absolute path is /zookeeper/quota/test/zookeeper_limits Output quota for /test count=10,bytes=-1 Output stat for /test count=1,bytes=5 [zk: 10.18.219.50:2181(CONNECTED) 23] delquota /test Authentication is not valid : /zookeeper/quota/test [zk: 10.18.219.50:2181(CONNECTED) 24] listquota /test absolute path is /zookeeper/quota/test/zookeeper_limits quota for /test does not exist. >> Here the quota has been deleted even though it says "Authentication is not valid..", which is not correct. Now, trying to set the quota from the other IP fails, as expected [zk: 10.18.219.50:2181(CONNECTED) 25] setquota -n 10 /test Authentication is not valid : /zookeeper/quota/test [zk: 10.18.219.50:2181(CONNECTED) 26] listquota /test absolute path is /zookeeper/quota/test/zookeeper_limits quota for /test does not exist. >> The same happens when we set a user ACL... 
[zk: 10.18.101.80:2181(CONNECTED) 26] addauth digest user:pass [zk: 10.18.101.80:2181(CONNECTED) 27] create /test hello Node already exists: /test [zk: 10.18.101.80:2181(CONNECTED) 28] delete /test [zk: 10.18.101.80:2181(CONNECTED) 29] create /test hello Created /test [zk: 10.18.101.80:2181(CONNECTED) 30] [zk: 10.18.101.80:2181(CONNECTED) 30] setquota -n 10 /test [zk: 10.18.101.80:2181(CONNECTED) 31] setAcl /zookeeper/quota auth:user:pass:crdwa [zk: 10.18.101.80:2181(CONNECTED) 32] [zk: 10.18.219.50:2181(CONNECTED) 27] listquota /test absolute path is /zookeeper/quota/test/zookeeper_limits Output quota for /test count=10,bytes=-1 Output stat for /test count=1,bytes=5 [zk: 10.18.219.50:2181(CONNECTED) 28] delquota /test Authentication is not valid : /zookeeper/quota/test [zk: 10.18.219.50:2181(CONNECTED) 29] listquota /test absolute path is /zookeeper/quota/test/zookeeper_limits quota for /test does not exist. [zk: 10.18.219.50:2181(CONNECTED) 30] |
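The delquota behavior above is easier to follow given where quota data lives: each znode's quota is stored under /zookeeper/quota in a limit node and a stat node, as the "absolute path is /zookeeper/quota/test/zookeeper_limits" output in the transcripts shows. A minimal sketch of that path mapping (a Python restatement of the convention for illustration only; the real logic lives in ZooKeeper's Java code):

```python
# Sketch of how ZooKeeper maps a znode path to its quota nodes,
# matching the "absolute path is ..." output printed by listquota.
# Python stand-in for illustration, not ZooKeeper's actual API.

QUOTA_ROOT = "/zookeeper/quota"
LIMIT_NODE = "zookeeper_limits"   # holds the configured limits
STAT_NODE = "zookeeper_stats"     # holds the current usage

def quota_path(znode_path):
    """Path of the limit node that delquota removes for znode_path."""
    return QUOTA_ROOT + znode_path + "/" + LIMIT_NODE

def stat_path(znode_path):
    """Path of the stat node tracking usage for znode_path."""
    return QUOTA_ROOT + znode_path + "/" + STAT_NODE
```

Because quota operations read and delete nodes under /zookeeper/quota, the ACL set on /zookeeper/quota is exactly what the report expects to protect delquota.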
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 42 weeks, 2 days ago | 0|i33nhb: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2583 | Using one client able to access the znode with localhost but fails from another client when IP ACL is set for znode using 127.0.0.1 |
Bug | Open | Minor | Unresolved | Rakesh Kumar Singh | Rakesh Kumar Singh | Rakesh Kumar Singh | 14/Sep/16 10:30 | 23/Sep/16 01:04 | 3.5.1 | server | 0 | 3 | One client is able to access the znode via localhost, but another client fails, when an IP ACL using 127.0.0.1 is set on the znode. Start ZooKeeper in cluster mode. Client 1 :- [zk: localhost:2181(CONNECTED) 11] create /ip_test hello Created /ip_test [zk: localhost:2181(CONNECTED) 12] setAcl /ip_test ip:127.0.0.1:crdwa [zk: localhost:2181(CONNECTED) 13] get /ip_test hello [zk: localhost:2181(CONNECTED) 14] set /ip_test hi [zk: localhost:2181(CONNECTED) 15] Client 2 :- [zk: localhost:2181(CONNECTED) 0] get /ip_test Authentication is not valid : /ip_test [zk: localhost:2181(CONNECTED) 1] getAcl /ip_test 'ip,'127.0.0.1 : cdrwa [zk: localhost:2181(CONNECTED) 2] quit Now quit the client connection and connect again using 127.0.0.1 (e.g. ./zkCli.sh -server 127.0.0.1:2181) [zk: 127.0.0.1:2181(CONNECTED) 0] get /ip_test hi |
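For context on the transcript above: an ip-scheme ACL is evaluated against the remote address of the client's TCP connection, not the hostname the client dialled, so the same ensemble can accept one connection and reject another depending on which interface the connection arrives on. A rough sketch of the comparison (a hypothetical Python stand-in, not ZooKeeper's actual implementation):

```python
import ipaddress

def ip_acl_matches(acl_id, client_addr):
    """Check a client's remote address against an ip-scheme ACL id.

    acl_id is the id part of an 'ip' ACL, e.g. "127.0.0.1" or
    "10.18.0.0/16" (an optional /bits prefix length is allowed).
    Sketch only: the real check is done server-side in Java
    against the address of the accepted socket.
    """
    network = ipaddress.ip_network(acl_id, strict=False)
    return ipaddress.ip_address(client_addr) in network
```

Under this model, a connection arriving over the loopback interface matches ip:127.0.0.1:crdwa, while the same user connecting through a non-loopback address does not, which is the asymmetry the report demonstrates.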
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 27 weeks, 1 day ago | 0|i33mg7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2582 | When addauth twice for same user but different password, it is adding 2 digest corresponding to both username, password and so we can able to access znode with user and any of these password which does not seem to be correct |
Bug | Open | Major | Unresolved | Unassigned | Rakesh Kumar Singh | Rakesh Kumar Singh | 14/Sep/16 09:38 | 20/Sep/16 15:29 | 3.5.1 | server | 0 | 3 | When addauth is run twice for the same user with different passwords, two digest entries are added, one per username/password pair, so the znode can be accessed with that user and either password, which does not seem correct. Steps:- [zk: localhost:2181(CONNECTED) 0] addauth digest user1:pass1 [zk: localhost:2181(CONNECTED) 1] addauth digest user1:pass [zk: localhost:2181(CONNECTED) 9] create /user_test5 hello Created /user_test5 [zk: localhost:2181(CONNECTED) 10] setAcl /user_test5 auth:user1:pass1:crdwa [zk: localhost:2181(CONNECTED) 11] getAcl /user_test5 'digest,'user1:+7K83PhyQ3ijGj0ADmljf0quVwQ= : cdrwa 'digest,'user1:UZIsvOKp29j8vAahJzjgpA1VTOk= : cdrwa Here we can see two entries for the same user (user1) with different passwords. Now disconnect the client, connect again using zkCli.sh, and addauth digest user1:<any of the 2 passwords>; we are able to access the znode. [zk: localhost:2181(CONNECTED) 0] get /user_test5 Authentication is not valid : /user_test5 [zk: localhost:2181(CONNECTED) 1] addauth digest user1:pass [zk: localhost:2181(CONNECTED) 2] get /user_test5 hello In the same way, it will allow n entries if we addauth for the same user with n different passwords |
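The two getAcl entries above follow from how the digest scheme derives an ACL id: the id embeds a hash of the whole user:password string, so the same user with two different passwords produces two distinct ids. A sketch of the derivation (a Python restatement of ZooKeeper's digest-scheme id generation, shown for illustration; exact encoding details are as I understand them, not verified against the source here):

```python
import base64
import hashlib

def generate_digest(id_password):
    """Derive a digest-scheme ACL id from "user:password".

    Mirrors ZooKeeper's convention: the id is
    user + ":" + base64(SHA-1(user:password)).
    """
    user = id_password.split(":", 1)[0]
    sha = hashlib.sha1(id_password.encode("utf-8")).digest()
    return user + ":" + base64.b64encode(sha).decode("ascii")
```

Since the hash covers the password, user1:pass1 and user1:pass hash to different ids, and setAcl with auth: records whichever ids the session has added, hence one ACL entry per password.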
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 26 weeks, 2 days ago | 0|i33mc7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2581 | Not handled NullPointerException while creating key manager and trustManager |
Bug | Resolved | Major | Fixed | maoling | Rakesh Kumar Singh | Rakesh Kumar Singh | 14/Sep/16 03:12 | 11/Sep/17 17:37 | 11/Sep/17 16:57 | 3.5.1 | 3.5.4, 3.6.0 | server | 1 | 6 | Unhandled NullPointerException while creating the key manager and trust manager: 2016-09-14 13:35:23,488 [myid:1] - ERROR [CommitProcWorkThread-1:X509AuthenticationProvider@78] - Failed to create key manager org.apache.zookeeper.common.X509Exception$KeyManagerException: java.lang.NullPointerException at org.apache.zookeeper.common.X509Util.createKeyManager(X509Util.java:129) at org.apache.zookeeper.server.auth.X509AuthenticationProvider.<init>(X509AuthenticationProvider.java:75) at org.apache.zookeeper.server.auth.ProviderRegistry.initialize(ProviderRegistry.java:42) at org.apache.zookeeper.server.auth.ProviderRegistry.getProvider(ProviderRegistry.java:68) at org.apache.zookeeper.server.PrepRequestProcessor.checkACL(PrepRequestProcessor.java:319) at org.apache.zookeeper.server.FinalRequestProcessor.processRequest(FinalRequestProcessor.java:324) at org.apache.zookeeper.server.quorum.CommitProcessor$CommitWorkRequest.doWork(CommitProcessor.java:296) at org.apache.zookeeper.server.WorkerService$ScheduledWorkRequest.run(WorkerService.java:162) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.NullPointerException at org.apache.zookeeper.common.X509Util.createKeyManager(X509Util.java:113) ... 
10 more 2016-09-14 13:35:23,489 [myid:1] - ERROR [CommitProcWorkThread-1:X509AuthenticationProvider@90] - Failed to create trust manager org.apache.zookeeper.common.X509Exception$TrustManagerException: java.lang.NullPointerException at org.apache.zookeeper.common.X509Util.createTrustManager(X509Util.java:158) at org.apache.zookeeper.server.auth.X509AuthenticationProvider.<init>(X509AuthenticationProvider.java:87) at org.apache.zookeeper.server.auth.ProviderRegistry.initialize(ProviderRegistry.java:42) at org.apache.zookeeper.server.auth.ProviderRegistry.getProvider(ProviderRegistry.java:68) at org.apache.zookeeper.server.PrepRequestProcessor.checkACL(PrepRequestProcessor.java:319) at org.apache.zookeeper.server.FinalRequestProcessor.processRequest(FinalRequestProcessor.java:324) at org.apache.zookeeper.server.quorum.CommitProcessor$CommitWorkRequest.doWork(CommitProcessor.java:296) at org.apache.zookeeper.server.WorkerService$ScheduledWorkRequest.run(WorkerService.java:162) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.NullPointerException at org.apache.zookeeper.common.X509Util.createTrustManager(X509Util.java:143) ... 10 more |
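The stack traces show X509Util dereferencing a null value when the keystore/truststore configuration is absent. A hedged sketch of the kind of guard the report asks for (a hypothetical Python stand-in: the function name, return value, and the property names in the message are illustrative assumptions, not ZooKeeper's actual API):

```python
def create_key_manager(keystore_path, keystore_password):
    """Fail with a descriptive configuration error when the keystore
    location/password are unset, instead of propagating a
    NullPointerException from deep inside key-manager construction.
    Hypothetical sketch, not ZooKeeper's actual implementation."""
    if keystore_path is None or keystore_password is None:
        raise ValueError(
            "keystore location/password not configured "
            "(e.g. the ssl keyStore location/password properties)"
        )
    # ... here the real code would load the keystore and build the
    # key manager; we return a placeholder for the sketch ...
    return {"path": keystore_path}
```

The point of the check is purely diagnostic: the failure still happens when X.509 auth is requested without configuration, but with an error that names the missing setting.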
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 2 years, 27 weeks, 3 days ago | 0|i33lrb: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2580 | ErrorMessage is not correct when set IP acl and try to set again from another machine |
Bug | Open | Minor | Unresolved | Rakesh Kumar Singh | Rakesh Kumar Singh | Rakesh Kumar Singh | 14/Sep/16 00:48 | 23/Sep/16 01:04 | 3.5.1 | java client | 0 | 3 | Set an IP ACL, then try to set it again from another machine: [zk: localhost:2181(CONNECTED) 11] setAcl /ip_test ip:10.18.101.80:crdwa KeeperErrorCode = NoAuth for /ip_test |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 26 weeks ago | 0|i33lnj: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2579 | ZooKeeper server should verify that dataDir and snapDir are writeable before starting |
Bug | Closed | Major | Fixed | Abraham Fine | Abraham Fine | Abraham Fine | 13/Sep/16 16:49 | 31/Mar/17 05:01 | 18/Sep/16 17:37 | 3.4.9, 3.5.2 | 3.4.10, 3.5.3, 3.6.0 | 0 | 4 | If the directories specified for the dataDir or the snapDir are not writeable, the server does not fail until it actually tries to write there. It should fail when it starts. | 9223372036854775807 | No Perforce job exists for this issue. | 5 | 9223372036854775807 | 3 years, 26 weeks, 4 days ago | Reviewed | 0|i33l9j: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
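The fix requested for this issue is a fail-fast check at startup. One robust way to probe writability is to actually create and remove a file in the directory, since permission bits alone can mislead (ACLs, read-only mounts). A minimal sketch (hypothetical Python stand-in, not the patch that was committed):

```python
import os
import tempfile

def assert_dir_writable(path):
    """Fail fast if a data/snapshot directory is missing or not
    writable, instead of failing later on the first transaction-log
    or snapshot write. Sketch of the startup check this issue asks for."""
    if not os.path.isdir(path):
        raise RuntimeError("%s is not a directory" % path)
    try:
        # Probe by creating a real file, then clean it up.
        fd, probe = tempfile.mkstemp(dir=path)
    except OSError as e:
        raise RuntimeError("%s is not writable" % path) from e
    os.close(fd)
    os.unlink(probe)
```

A server would call this once per configured directory before binding any ports, so misconfiguration surfaces immediately at startup.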
| ZooKeeper | ZOOKEEPER-2578 | zkEnv.sh does not set $ZOOCFG properly if already set |
Bug | Open | Minor | Unresolved | Unassigned | Bjorn Stange | Bjorn Stange | 13/Sep/16 16:30 | 21/Sep/16 11:37 | 3.2.0, 3.2.1, 3.2.2, 3.2.3, 3.3.0, 3.3.1, 3.3.2, 3.3.3, 3.3.4, 3.4.0, 3.4.1, 3.4.2, 3.4.3, 3.3.5, 3.3.6, 3.4.4, 3.4.5, 3.4.6, 3.4.7, 3.4.8, 3.4.9, 3.4.10, 3.5.0, 3.5.1, 3.5.2 | 0 | 3 | In bin/zkEnv.sh, the ZOOCFG variable is clobbered if it is already set (in my use case it was being set in zookeeper-env.sh). The problem arises from this line (line 61 on the master branch at the time of this submission): ZOOCFG="$ZOOCFGDIR/$ZOOCFG". This unconditionally prefixes the old value of ZOOCFG with ZOOCFGDIR, which is wrong when ZOOCFG was already initialized as the absolute path to a file. The script assumes ZOOCFG holds only a filename and that its final value should be the absolute path to the ZooKeeper configuration file; the bug arises when ZOOCFG already was that absolute path. | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | Patch | 3 years, 26 weeks, 1 day ago | zkEnv no longer erroneously overwrites ZOOCFG | 0|i33l8v: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
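The zkEnv.sh behavior described in this issue can be restated compactly: ZOOCFG defaults to a bare filename and is then unconditionally prefixed with ZOOCFGDIR, which mangles a preset absolute path. A Python restatement of the buggy logic next to one possible guard (illustration only; not necessarily the project's actual fix):

```python
import os

def resolve_zoocfg_buggy(zoocfgdir, zoocfg):
    """Restatement of the zkEnv.sh logic this issue describes:
    ZOOCFG defaults to zoo.cfg, then is unconditionally prefixed
    with ZOOCFGDIR -- mangling a preset absolute path."""
    if not zoocfg:
        zoocfg = "zoo.cfg"
    return zoocfgdir + "/" + zoocfg  # the ZOOCFG="$ZOOCFGDIR/$ZOOCFG" step

def resolve_zoocfg_guarded(zoocfgdir, zoocfg):
    """One possible guard: only prepend ZOOCFGDIR when ZOOCFG is a
    bare/relative name (sketch, not the committed patch)."""
    if not zoocfg:
        zoocfg = "zoo.cfg"
    if os.path.isabs(zoocfg):
        return zoocfg
    return zoocfgdir + "/" + zoocfg
```

The first function reproduces the reported symptom; the second leaves a preset absolute ZOOCFG untouched while preserving the default behavior.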
| ZooKeeper | ZOOKEEPER-2577 | Flaky Test: org.apache.zookeeper.server.quorum.ReconfigDuringLeaderSyncTest.testDuringLeaderSync |
Test | Resolved | Major | Fixed | Michael Han | Michael Han | Michael Han | 13/Sep/16 14:33 | 27/Jul/17 17:45 | 27/Jul/17 17:14 | 3.5.2 | 3.5.4, 3.6.0 | tests | 0 | 4 | ZOOKEEPER-2135 | {noformat} Error Message zoo.cfg.dynamic.next is not deleted. Stacktrace junit.framework.AssertionFailedError: zoo.cfg.dynamic.next is not deleted. at org.apache.zookeeper.server.quorum.ReconfigDuringLeaderSyncTest.testDuringLeaderSync(ReconfigDuringLeaderSyncTest.java:155) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:79) Standard Output 2016-09-13 05:09:25,247 [myid:] - INFO [main:JUnit4ZKTestRunner@47] - No test.method specified. using default methods. 2016-09-13 05:09:25,349 [myid:] - INFO [main:JUnit4ZKTestRunner@47] - No test.method specified. using default methods. 2016-09-13 05:09:25,370 [myid:] - INFO [main:ZKTestCase$1@55] - STARTING testDuringLeaderSync 2016-09-13 05:09:25,372 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@77] - RUNNING TEST METHOD testDuringLeaderSync 2016-09-13 05:09:25,375 [myid:] - INFO [main:PortAssignment@151] - Test process 2/8 using ports from 13914 - 16606. 2016-09-13 05:09:25,380 [myid:] - INFO [main:PortAssignment@85] - Assigned port 13915 from range 13914 - 16606. 2016-09-13 05:09:25,380 [myid:] - INFO [main:PortAssignment@85] - Assigned port 13916 from range 13914 - 16606. 2016-09-13 05:09:25,381 [myid:] - INFO [main:PortAssignment@85] - Assigned port 13917 from range 13914 - 16606. 2016-09-13 05:09:25,381 [myid:] - INFO [main:PortAssignment@85] - Assigned port 13918 from range 13914 - 16606. 2016-09-13 05:09:25,381 [myid:] - INFO [main:PortAssignment@85] - Assigned port 13919 from range 13914 - 16606. 2016-09-13 05:09:25,382 [myid:] - INFO [main:PortAssignment@85] - Assigned port 13920 from range 13914 - 16606. 2016-09-13 05:09:25,382 [myid:] - INFO [main:PortAssignment@85] - Assigned port 13921 from range 13914 - 16606. 
2016-09-13 05:09:25,382 [myid:] - INFO [main:PortAssignment@85] - Assigned port 13922 from range 13914 - 16606. 2016-09-13 05:09:25,383 [myid:] - INFO [main:PortAssignment@85] - Assigned port 13923 from range 13914 - 16606. 2016-09-13 05:09:25,406 [myid:] - INFO [main:QuorumPeerTestBase$MainThread@131] - id = 0 tmpDir = /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test/tmp/test8397079557861207505.junit.dir clientPort = 13915 adminServerPort = 8080 2016-09-13 05:09:25,416 [myid:] - INFO [main:QuorumPeerTestBase$MainThread@131] - id = 1 tmpDir = /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test/tmp/test1768888919429940621.junit.dir clientPort = 13918 adminServerPort = 8080 2016-09-13 05:09:25,420 [myid:] - INFO [main:QuorumPeerTestBase$MainThread@131] - id = 2 tmpDir = /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test/tmp/test5455612786130415623.junit.dir clientPort = 13921 adminServerPort = 8080 2016-09-13 05:09:25,422 [myid:] - INFO [Thread-0:QuorumPeerConfig@116] - Reading configuration from: /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test/tmp/test8397079557861207505.junit.dir/zoo.cfg 2016-09-13 05:09:25,422 [myid:] - INFO [Thread-2:QuorumPeerConfig@116] - Reading configuration from: /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test/tmp/test5455612786130415623.junit.dir/zoo.cfg 2016-09-13 05:09:25,422 [myid:] - INFO [Thread-1:QuorumPeerConfig@116] - Reading configuration from: /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test/tmp/test1768888919429940621.junit.dir/zoo.cfg 2016-09-13 05:09:25,424 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 13915 2016-09-13 05:09:25,425 [myid:] - INFO [Thread-0:QuorumPeerConfig@318] - clientPortAddress is 0.0.0.0/0.0.0.0:13915 2016-09-13 05:09:25,425 [myid:] - INFO [Thread-0:QuorumPeerConfig@322] - secureClientPort is not set 2016-09-13 05:09:25,425 [myid:] - INFO 
[Thread-1:QuorumPeerConfig@318] - clientPortAddress is 0.0.0.0/0.0.0.0:13918 2016-09-13 05:09:25,425 [myid:] - INFO [Thread-1:QuorumPeerConfig@322] - secureClientPort is not set 2016-09-13 05:09:25,425 [myid:] - INFO [Thread-2:QuorumPeerConfig@318] - clientPortAddress is 0.0.0.0/0.0.0.0:13921 2016-09-13 05:09:25,426 [myid:] - INFO [Thread-2:QuorumPeerConfig@322] - secureClientPort is not set 2016-09-13 05:09:25,430 [myid:] - INFO [main:ClientBase@248] - server 127.0.0.1:13915 not up java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.client.FourLetterWordMain.send4LetterWord(FourLetterWordMain.java:99) at org.apache.zookeeper.client.FourLetterWordMain.send4LetterWord(FourLetterWordMain.java:69) at org.apache.zookeeper.test.ClientBase.waitForServerUp(ClientBase.java:241) at org.apache.zookeeper.test.ClientBase.waitForServerUp(ClientBase.java:232) at org.apache.zookeeper.server.quorum.ReconfigDuringLeaderSyncTest.testDuringLeaderSync(ReconfigDuringLeaderSyncTest.java:85) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:79) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:53) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:38) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:535) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1182) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:1033) 2016-09-13 05:09:25,444 [myid:1] - INFO [Thread-1:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3 2016-09-13 05:09:25,445 [myid:2] - INFO [Thread-2:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3 2016-09-13 05:09:25,445 [myid:2] - INFO [Thread-2:DatadirCleanupManager@79] - autopurge.purgeInterval set to 0 2016-09-13 05:09:25,444 [myid:0] - INFO [Thread-0:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3 2016-09-13 05:09:25,445 [myid:2] - INFO [Thread-2:DatadirCleanupManager@101] - Purge task is not scheduled. 
2016-09-13 05:09:25,445 [myid:1] - INFO [Thread-1:DatadirCleanupManager@79] - autopurge.purgeInterval set to 0 2016-09-13 05:09:25,446 [myid:1] - INFO [Thread-1:DatadirCleanupManager@101] - Purge task is not scheduled. 2016-09-13 05:09:25,445 [myid:0] - INFO [Thread-0:DatadirCleanupManager@79] - autopurge.purgeInterval set to 0 2016-09-13 05:09:25,446 [myid:0] - INFO [Thread-0:DatadirCleanupManager@101] - Purge task is not scheduled. 2016-09-13 05:09:25,446 [myid:2] - INFO [Thread-2:ManagedUtil@46] - Log4j found with jmx enabled. 2016-09-13 05:09:25,447 [myid:0] - INFO [Thread-0:ManagedUtil@46] - Log4j found with jmx enabled. 2016-09-13 05:09:25,446 [myid:1] - INFO [Thread-1:ManagedUtil@46] - Log4j found with jmx enabled. 2016-09-13 05:09:25,552 [myid:1] - ERROR [Thread-1:ManagedUtil@114] - Problems while registering log4j jmx beans! javax.management.InstanceAlreadyExistsException: log4j:hiearchy=default at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324) at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) at org.apache.zookeeper.jmx.ManagedUtil.registerLog4jMBeans(ManagedUtil.java:75) at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:131) at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:120) at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.run(QuorumPeerTestBase.java:245) at java.lang.Thread.run(Thread.java:745) 2016-09-13 05:09:25,552 [myid:1] - WARN 
[Thread-1:QuorumPeerMain@133] - Unable to register log4j JMX control javax.management.JMException: javax.management.InstanceAlreadyExistsException: log4j:hiearchy=default at org.apache.zookeeper.jmx.ManagedUtil.registerLog4jMBeans(ManagedUtil.java:115) at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:131) at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:120) at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.run(QuorumPeerTestBase.java:245) at java.lang.Thread.run(Thread.java:745) 2016-09-13 05:09:25,553 [myid:1] - INFO [Thread-1:QuorumPeerMain@136] - Starting quorum peer 2016-09-13 05:09:25,553 [myid:2] - ERROR [Thread-2:HierarchyDynamicMBean@138] - Could not add loggerMBean for [root]. javax.management.InstanceAlreadyExistsException: log4j:logger=root at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324) at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) at org.apache.log4j.jmx.AbstractDynamicMBean.registerMBean(AbstractDynamicMBean.java:160) at org.apache.log4j.jmx.HierarchyDynamicMBean.addLoggerMBean(HierarchyDynamicMBean.java:125) at org.apache.log4j.jmx.HierarchyDynamicMBean.postRegister(HierarchyDynamicMBean.java:263) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.postRegister(DefaultMBeanServerInterceptor.java:1024) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:974) at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324) at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) at org.apache.zookeeper.jmx.ManagedUtil.registerLog4jMBeans(ManagedUtil.java:75) at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:131) at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:120) at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.run(QuorumPeerTestBase.java:245) at java.lang.Thread.run(Thread.java:745) 2016-09-13 05:09:25,559 [myid:0] - ERROR [Thread-0:ManagedUtil@114] - Problems while registering log4j jmx beans! javax.management.InstanceAlreadyExistsException: log4j:hiearchy=default at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324) at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) at org.apache.zookeeper.jmx.ManagedUtil.registerLog4jMBeans(ManagedUtil.java:75) at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:131) at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:120) at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.run(QuorumPeerTestBase.java:245) at java.lang.Thread.run(Thread.java:745) 2016-09-13 05:09:25,560 [myid:0] - WARN [Thread-0:QuorumPeerMain@133] - Unable to 
register log4j JMX control javax.management.JMException: javax.management.InstanceAlreadyExistsException: log4j:hiearchy=default at org.apache.zookeeper.jmx.ManagedUtil.registerLog4jMBeans(ManagedUtil.java:115) at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:131) at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:120) at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.run(QuorumPeerTestBase.java:245) at java.lang.Thread.run(Thread.java:745) 2016-09-13 05:09:25,560 [myid:0] - INFO [Thread-0:QuorumPeerMain@136] - Starting quorum peer 2016-09-13 05:09:25,562 [myid:2] - INFO [Thread-2:QuorumPeerMain@136] - Starting quorum peer 2016-09-13 05:09:25,581 [myid:0] - INFO [Thread-0:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 1 selector thread(s), 8 worker threads, and 64 kB direct buffers. 2016-09-13 05:09:25,581 [myid:2] - INFO [Thread-2:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 1 selector thread(s), 8 worker threads, and 64 kB direct buffers. 2016-09-13 05:09:25,585 [myid:1] - INFO [Thread-1:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 1 selector thread(s), 8 worker threads, and 64 kB direct buffers. 
2016-09-13 05:09:25,591 [myid:0] - INFO [Thread-0:NIOServerCnxnFactory@686] - binding to port /127.0.0.1:13915 2016-09-13 05:09:25,592 [myid:1] - INFO [Thread-1:NIOServerCnxnFactory@686] - binding to port /127.0.0.1:13918 2016-09-13 05:09:25,593 [myid:2] - INFO [Thread-2:NIOServerCnxnFactory@686] - binding to port /127.0.0.1:13921 2016-09-13 05:09:25,622 [myid:2] - INFO [Thread-2:QuorumPeer@1327] - Local sessions disabled 2016-09-13 05:09:25,622 [myid:2] - INFO [Thread-2:QuorumPeer@1338] - Local session upgrading disabled 2016-09-13 05:09:25,622 [myid:2] - INFO [Thread-2:QuorumPeer@1305] - tickTime set to 4000 2016-09-13 05:09:25,622 [myid:2] - INFO [Thread-2:QuorumPeer@1349] - minSessionTimeout set to 8000 2016-09-13 05:09:25,622 [myid:2] - INFO [Thread-2:QuorumPeer@1360] - maxSessionTimeout set to 80000 2016-09-13 05:09:25,622 [myid:1] - INFO [Thread-1:QuorumPeer@1327] - Local sessions disabled 2016-09-13 05:09:25,622 [myid:1] - INFO [Thread-1:QuorumPeer@1338] - Local session upgrading disabled 2016-09-13 05:09:25,623 [myid:1] - INFO [Thread-1:QuorumPeer@1305] - tickTime set to 4000 2016-09-13 05:09:25,623 [myid:1] - INFO [Thread-1:QuorumPeer@1349] - minSessionTimeout set to 8000 2016-09-13 05:09:25,623 [myid:1] - INFO [Thread-1:QuorumPeer@1360] - maxSessionTimeout set to 80000 2016-09-13 05:09:25,623 [myid:1] - INFO [Thread-1:QuorumPeer@1375] - initLimit set to 10 2016-09-13 05:09:25,622 [myid:0] - INFO [Thread-0:QuorumPeer@1327] - Local sessions disabled 2016-09-13 05:09:25,625 [myid:0] - INFO [Thread-0:QuorumPeer@1338] - Local session upgrading disabled 2016-09-13 05:09:25,625 [myid:0] - INFO [Thread-0:QuorumPeer@1305] - tickTime set to 4000 2016-09-13 05:09:25,625 [myid:0] - INFO [Thread-0:QuorumPeer@1349] - minSessionTimeout set to 8000 2016-09-13 05:09:25,625 [myid:0] - INFO [Thread-0:QuorumPeer@1360] - maxSessionTimeout set to 80000 2016-09-13 05:09:25,625 [myid:0] - INFO [Thread-0:QuorumPeer@1375] - initLimit set to 10 2016-09-13 05:09:25,622 [myid:2] - 
INFO [Thread-2:QuorumPeer@1375] - initLimit set to 10 2016-09-13 05:09:25,666 [myid:0] - INFO [Thread-0:QuorumPeer@776] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-09-13 05:09:25,668 [myid:1] - INFO [Thread-1:QuorumPeer@776] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-09-13 05:09:25,668 [myid:2] - INFO [Thread-2:QuorumPeer@776] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-09-13 05:09:25,669 [myid:0] - INFO [Thread-0:QuorumPeer@791] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-09-13 05:09:25,672 [myid:1] - INFO [Thread-1:QuorumPeer@791] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-09-13 05:09:25,677 [myid:2] - INFO [Thread-2:QuorumPeer@791] - acceptedEpoch not found! Creating with a reasonable default of 0. 
This should only happen when you are upgrading your installation 2016-09-13 05:09:25,689 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 13915 2016-09-13 05:09:25,704 [myid:0] - INFO [NIOServerCxnFactory.AcceptThread:/127.0.0.1:13915:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:38366 2016-09-13 05:09:25,758 [myid:0] - INFO [NIOWorkerThread-1:NIOServerCnxn@485] - Processing stat command from /127.0.0.1:38366 2016-09-13 05:09:25,770 [myid:1] - INFO [QuorumPeerListener:QuorumCnxManager$Listener@632] - My election bind port: /127.0.0.1:13920 2016-09-13 05:09:25,772 [myid:0] - INFO [NIOWorkerThread-1:NIOServerCnxn@607] - Closed socket connection for client /127.0.0.1:38366 (no session established for client) 2016-09-13 05:09:25,783 [myid:1] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:13918)(secure=disabled):QuorumPeer@1033] - LOOKING 2016-09-13 05:09:25,785 [myid:1] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:13918)(secure=disabled):FastLeaderElection@894] - New election. My id = 1, proposed zxid=0x0 2016-09-13 05:09:25,797 [myid:0] - INFO [QuorumPeerListener:QuorumCnxManager$Listener@632] - My election bind port: /127.0.0.1:13917 2016-09-13 05:09:25,798 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled):QuorumPeer@1033] - LOOKING 2016-09-13 05:09:25,798 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled):FastLeaderElection@894] - New election. 
My id = 0, proposed zxid=0x0
2016-09-13 05:09:25,799 [myid:0] - INFO [/127.0.0.1:13917:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:53023
2016-09-13 05:09:25,806 [myid:2] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:13921)(secure=disabled):QuorumPeer@1033] - LOOKING
2016-09-13 05:09:25,806 [myid:2] - INFO [QuorumPeerListener:QuorumCnxManager$Listener@632] - My election bind port: /127.0.0.1:13923
2016-09-13 05:09:25,806 [myid:2] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:13921)(secure=disabled):FastLeaderElection@894] - New election. My id = 2, proposed zxid=0x0
2016-09-13 05:09:25,825 [myid:2] - INFO [/127.0.0.1:13923:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:58679
2016-09-13 05:09:25,826 [myid:0] - INFO [WorkerSender[myid=0]:QuorumCnxManager@276] - Have smaller server identifier, so dropping the connection: (2, 0)
2016-09-13 05:09:25,825 [myid:1] - INFO [WorkerSender[myid=1]:QuorumCnxManager@276] - Have smaller server identifier, so dropping the connection: (2, 1)
2016-09-13 05:09:25,827 [myid:0] - INFO [/127.0.0.1:13917:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:53026
2016-09-13 05:09:25,827 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@688] - Notification: 2 (message format version), 0 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-09-13 05:09:25,827 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@688] - Notification: 2 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-09-13 05:09:25,848 [myid:1] - INFO [/127.0.0.1:13920:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:46423
2016-09-13 05:09:25,848 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@688] - Notification: 2 (message format version), 0 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-09-13 05:09:25,857 [myid:2] - INFO [/127.0.0.1:13923:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:58680
2016-09-13 05:09:25,857 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@688] - Notification: 2 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-09-13 05:09:25,858 [myid:2] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@915] - Connection broken for id 0, my id = 2, error = java.net.SocketException: Socket closed
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.read(SocketInputStream.java:152)
    at java.net.SocketInputStream.read(SocketInputStream.java:122)
    at java.net.SocketInputStream.read(SocketInputStream.java:210)
    at java.io.DataInputStream.readInt(DataInputStream.java:387)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900)
2016-09-13 05:09:25,858 [myid:2] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker
2016-09-13 05:09:25,862 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@688] - Notification: 2 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-09-13 05:09:25,863 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@688] - Notification: 2 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-09-13 05:09:25,865 [myid:0] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@837] - Exception when using channel: for id 2 my id = 0 error = java.net.SocketException: Broken pipe
2016-09-13 05:09:25,867 [myid:0] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker
2016-09-13 05:09:25,898 [myid:2] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@832] - Interrupted while waiting for message on queue
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:982)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:63)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:820)
2016-09-13 05:09:25,898 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-09-13 05:09:25,898 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-09-13 05:09:25,899 [myid:0] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 2 my id = 0
2016-09-13 05:09:25,899 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@688] - Notification: 2 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-09-13 05:09:25,900 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-09-13 05:09:25,901 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-09-13 05:09:25,901 [myid:0] - INFO [/127.0.0.1:13917:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:53028
2016-09-13 05:09:25,898 [myid:2] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 0 my id = 2
2016-09-13 05:09:25,901 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-09-13 05:09:25,902 [myid:0] - INFO [WorkerSender[myid=0]:QuorumCnxManager@276] - Have smaller server identifier, so dropping the connection: (2, 0)
2016-09-13 05:09:25,903 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-09-13 05:09:25,903 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-09-13 05:09:25,904 [myid:2] - INFO [/127.0.0.1:13923:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:58684
2016-09-13 05:09:25,905 [myid:2] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@832] - Interrupted while waiting for message on queue
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:982)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:63)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:820)
2016-09-13 05:09:25,905 [myid:2] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 0 my id = 2
2016-09-13 05:09:25,905 [myid:2] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker
2016-09-13 05:09:25,907 [myid:0] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@837] - Exception when using channel: for id 2 my id = 0 error = java.net.SocketException: Broken pipe
2016-09-13 05:09:25,907 [myid:0] - INFO [/127.0.0.1:13917:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:53030
2016-09-13 05:09:25,908 [myid:0] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 2 my id = 0
2016-09-13 05:09:25,908 [myid:0] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker
2016-09-13 05:09:25,909 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-09-13 05:09:25,910 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-09-13 05:09:26,022 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 13915
2016-09-13 05:09:26,023 [myid:0] - INFO [NIOServerCxnFactory.AcceptThread:/127.0.0.1:13915:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:38382
2016-09-13 05:09:26,032 [myid:0] - INFO [NIOWorkerThread-2:NIOServerCnxn@485] - Processing stat command from /127.0.0.1:38382
2016-09-13 05:09:26,033 [myid:0] - INFO [NIOWorkerThread-2:NIOServerCnxn@607] - Closed socket connection for client /127.0.0.1:38382 (no session established for client)
2016-09-13 05:09:26,103 [myid:1] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:13918)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id1,name1=replica.1,name2=LeaderElection]
2016-09-13 05:09:26,104 [myid:1] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:13918)(secure=disabled):QuorumPeer@1109] - FOLLOWING
2016-09-13 05:09:26,109 [myid:2] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:13921)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id2,name1=replica.2,name2=LeaderElection]
2016-09-13 05:09:26,110 [myid:1] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:13918)(secure=disabled):Learner@88] - TCP NoDelay set to: true
2016-09-13 05:09:26,110 [myid:2] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:13921)(secure=disabled):QuorumPeer@1121] - LEADING
2016-09-13 05:09:26,111 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id0,name1=replica.0,name2=LeaderElection]
2016-09-13 05:09:26,111 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled):QuorumPeer@1109] - FOLLOWING
2016-09-13 05:09:26,113 [myid:2] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:13921)(secure=disabled):Leader@63] - TCP NoDelay set to: true
2016-09-13 05:09:26,113 [myid:2] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:13921)(secure=disabled):Leader@83] - zookeeper.leader.maxConcurrentSnapshots = 10
2016-09-13 05:09:26,113 [myid:2] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:13921)(secure=disabled):Leader@85] - zookeeper.leader.maxConcurrentSnapshotTimeout = 5
2016-09-13 05:09:26,122 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled):Environment@109] - Server environment:zookeeper.version=3.6.0-SNAPSHOT--1, built on 09/13/2016 05:08 GMT
2016-09-13 05:09:26,122 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled):Environment@109] - Server environment:host.name=jenkins-test-5f3
2016-09-13 05:09:26,123 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled):Environment@109] - Server environment:java.version=1.7.0_101
2016-09-13 05:09:26,123 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled):Environment@109] - Server environment:java.vendor=Oracle Corporation
2016-09-13 05:09:26,123 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled):Environment@109] - Server environment:java.home=/usr/lib/jvm/java-7-openjdk-amd64/jre
2016-09-13 05:09:26,123 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled):Environment@109] - Server environment:java.class.path=/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test/classes:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test/lib/antlr-2.7.7.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test/lib/antlr4-runtime-4.5.1-1.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test/lib/checkstyle-6.13.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test/lib/commons-beanutils-1.9.2.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test/lib/commons-cli-1.3.1.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test/lib/commons-collections-3.2.2.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test/lib/commons-lang3-3.4.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test/lib/commons-logging-1.1.1.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test/lib/guava-18.0.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test/lib/hamcrest-core-1.3.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test/lib/junit-4.12.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test/lib/mockito-all-1.8.2.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/classes:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/src/java/lib/ivy-2.4.0.jar:/home/jenkins/tools/ant/latest/lib/ant.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/lib/commons-cli-1.2.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/lib/jackson-core-asl-1.9.11.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/lib/jackson-mapper-asl-1.9.11.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/lib/javacc.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/lib/javax.servlet-api-3.1.0.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/lib/jetty-http-9.2.18.v20160721.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/lib/jetty-io-9.2.18.v20160721.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/lib/jetty-security-9.2.18.v20160721.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/lib/jetty-server-9.2.18.v20160721.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/lib/jetty-servlet-9.2.18.v20160721.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/lib/jetty-util-9.2.18.v20160721.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/lib/jline-2.11.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/lib/log4j-1.2.17.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/lib/netty-3.10.5.Final.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/lib/slf4j-api-1.7.5.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/lib/slf4j-log4j12-1.7.5.jar:/usr/local/asfpackages/ant/apache-ant-1.9.7/lib/ant-launcher.jar:/home/jenkins/tools/ant/latest/lib/ant-junit.jar:/home/jenkins/tools/ant/latest/lib/ant-junit4.jar
2016-09-13 05:09:26,123 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled):Environment@109] - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
2016-09-13 05:09:26,123 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled):Environment@109] - Server environment:java.io.tmpdir=/tmp
2016-09-13 05:09:26,124 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled):Environment@109] - Server environment:java.compiler=<NA>
2016-09-13 05:09:26,124 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled):Environment@109] - Server environment:os.name=Linux
2016-09-13 05:09:26,124 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled):Environment@109] - Server environment:os.arch=amd64
2016-09-13 05:09:26,124 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled):Environment@109] - Server environment:os.version=3.13.0-30-generic
2016-09-13 05:09:26,124 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled):Environment@109] - Server environment:user.name=jenkins
2016-09-13 05:09:26,124 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled):Environment@109] - Server environment:user.home=/home/jenkins
2016-09-13 05:09:26,124 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled):Environment@109] - Server environment:user.dir=/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test
2016-09-13 05:09:26,125 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled):Environment@109] - Server environment:os.memory.free=49MB
2016-09-13 05:09:26,125 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled):Environment@109] - Server environment:os.memory.max=455MB
2016-09-13 05:09:26,125 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled):Environment@109] - Server environment:os.memory.total=60MB
2016-09-13 05:09:26,126 [myid:2] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:13921)(secure=disabled):ZooKeeperServer@889] - minSessionTimeout set to 8000
2016-09-13 05:09:26,127 [myid:2] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:13921)(secure=disabled):ZooKeeperServer@898] - maxSessionTimeout set to 80000
2016-09-13 05:09:26,127 [myid:2] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:13921)(secure=disabled):ZooKeeperServer@159] - Created server with tickTime 4000 minSessionTimeout 8000 maxSessionTimeout 80000 datadir /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test/tmp/test5455612786130415623.junit.dir/data/version-2 snapdir /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test/tmp/test5455612786130415623.junit.dir/data/version-2
2016-09-13 05:09:26,126 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled):ZooKeeperServer@889] - minSessionTimeout set to 8000
2016-09-13 05:09:26,129 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled):ZooKeeperServer@898] - maxSessionTimeout set to 80000
2016-09-13 05:09:26,129 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled):ZooKeeperServer@159] - Created server with tickTime 4000 minSessionTimeout 8000 maxSessionTimeout 80000 datadir /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test/tmp/test8397079557861207505.junit.dir/data/version-2 snapdir /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test/tmp/test8397079557861207505.junit.dir/data/version-2
2016-09-13 05:09:26,129 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled):Follower@66] - FOLLOWING - LEADER ELECTION TOOK - 18 MS
2016-09-13 05:09:26,126 [myid:1] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:13918)(secure=disabled):ZooKeeperServer@889] - minSessionTimeout set to 8000
2016-09-13 05:09:26,129 [myid:2] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:13921)(secure=disabled):Leader@412] - LEADING - LEADER ELECTION TOOK - 20 MS
2016-09-13 05:09:26,140 [myid:2] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:13921)(secure=disabled):FileTxnSnapLog@298] - Snapshotting: 0x0 to /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test/tmp/test5455612786130415623.junit.dir/data/version-2/snapshot.0
2016-09-13 05:09:26,137 [myid:1] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:13918)(secure=disabled):ZooKeeperServer@898] - maxSessionTimeout set to 80000
2016-09-13 05:09:26,141 [myid:1] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:13918)(secure=disabled):ZooKeeperServer@159] - Created server with tickTime 4000 minSessionTimeout 8000 maxSessionTimeout 80000 datadir /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test/tmp/test1768888919429940621.junit.dir/data/version-2 snapdir /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test/tmp/test1768888919429940621.junit.dir/data/version-2
2016-09-13 05:09:26,141 [myid:1] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:13918)(secure=disabled):Follower@66] - FOLLOWING - LEADER ELECTION TOOK - 37 MS
2016-09-13 05:09:26,185 [myid:2] - INFO [LearnerHandler-/127.0.0.1:39078:LearnerHandler@382] - Follower sid: 0 : info : 127.0.0.1:13916:13917:participant;127.0.0.1:13915
2016-09-13 05:09:26,191 [myid:2] - INFO [LearnerHandler-/127.0.0.1:39079:LearnerHandler@382] - Follower sid: 1 : info : 127.0.0.1:13919:13920:participant;127.0.0.1:13918
2016-09-13 05:09:26,198 [myid:2] - INFO [LearnerHandler-/127.0.0.1:39079:LearnerHandler@683] - Synchronizing with Follower sid: 1 maxCommittedLog=0x0 minCommittedLog=0x0 lastProcessedZxid=0x0 peerLastZxid=0x0
2016-09-13 05:09:26,198 [myid:2] - INFO [LearnerHandler-/127.0.0.1:39079:LearnerHandler@727] - Sending DIFF zxid=0x0 for peer sid: 1
2016-09-13 05:09:26,198 [myid:2] - INFO [LearnerHandler-/127.0.0.1:39078:LearnerHandler@683] - Synchronizing with Follower sid: 0 maxCommittedLog=0x0 minCommittedLog=0x0 lastProcessedZxid=0x0 peerLastZxid=0x0
2016-09-13 05:09:26,199 [myid:2] - INFO [LearnerHandler-/127.0.0.1:39078:LearnerHandler@727] - Sending DIFF zxid=0x0 for peer sid: 0
2016-09-13 05:09:26,201 [myid:1] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:13918)(secure=disabled):Learner@366] - Getting a diff from the leader 0x0
2016-09-13 05:09:26,206 [myid:1] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:13918)(secure=disabled):Learner@509] - Learner received NEWLEADER message
2016-09-13 05:09:26,209 [myid:1] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:13918)(secure=disabled):FileTxnSnapLog@298] - Snapshotting: 0x0 to /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test/tmp/test1768888919429940621.junit.dir/data/version-2/snapshot.0
2016-09-13 05:09:26,210 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled):Learner@366] - Getting a diff from the leader 0x0
2016-09-13 05:09:26,211 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled):Learner@509] - Learner received NEWLEADER message
2016-09-13 05:09:26,213 [myid:2] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:13921)(secure=disabled):Leader@1255] - Have quorum of supporters, sids: [ [1, 2],[1, 2] ]; starting up and setting last processed zxid: 0x100000000
2016-09-13 05:09:26,215 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled):FileTxnSnapLog@298] - Snapshotting: 0x0 to /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test/tmp/test8397079557861207505.junit.dir/data/version-2/snapshot.0
2016-09-13 05:09:26,238 [myid:2] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:13921)(secure=disabled):CommitProcessor@318] - Configuring CommitProcessor with 4 worker threads.
2016-09-13 05:09:26,252 [myid:2] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:13921)(secure=disabled):ContainerManager@64] - Using checkIntervalMs=60000 maxPerMinute=10000
2016-09-13 05:09:26,254 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled):Learner@493] - Learner received UPTODATE message
2016-09-13 05:09:26,254 [myid:1] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:13918)(secure=disabled):Learner@493] - Learner received UPTODATE message
2016-09-13 05:09:26,266 [myid:1] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:13918)(secure=disabled):CommitProcessor@318] - Configuring CommitProcessor with 4 worker threads.
2016-09-13 05:09:26,272 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled):CommitProcessor@318] - Configuring CommitProcessor with 4 worker threads. 2016-09-13 05:09:26,283 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 13915 2016-09-13 05:09:26,284 [myid:0] - INFO [NIOServerCxnFactory.AcceptThread:/127.0.0.1:13915:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:38385 2016-09-13 05:09:26,285 [myid:0] - INFO [NIOWorkerThread-3:NIOServerCnxn@485] - Processing stat command from /127.0.0.1:38385 2016-09-13 05:09:26,286 [myid:0] - INFO [NIOWorkerThread-3:StatCommand@49] - Stat command output 2016-09-13 05:09:26,287 [myid:0] - INFO [NIOWorkerThread-3:NIOServerCnxn@607] - Closed socket connection for client /127.0.0.1:38385 (no session established for client) 2016-09-13 05:09:26,288 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 13918 2016-09-13 05:09:26,288 [myid:1] - INFO [NIOServerCxnFactory.AcceptThread:/127.0.0.1:13918:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:55309 2016-09-13 05:09:26,291 [myid:1] - INFO [NIOWorkerThread-1:NIOServerCnxn@485] - Processing stat command from /127.0.0.1:55309 2016-09-13 05:09:26,291 [myid:1] - INFO [NIOWorkerThread-1:StatCommand@49] - Stat command output 2016-09-13 05:09:26,292 [myid:1] - INFO [NIOWorkerThread-1:NIOServerCnxn@607] - Closed socket connection for client /127.0.0.1:55309 (no session established for client) 2016-09-13 05:09:26,292 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 13921 2016-09-13 05:09:26,293 [myid:2] - INFO [NIOServerCxnFactory.AcceptThread:/127.0.0.1:13921:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:45002 2016-09-13 05:09:26,297 [myid:2] - INFO [NIOWorkerThread-1:NIOServerCnxn@485] - Processing stat command from /127.0.0.1:45002 2016-09-13 05:09:26,297 [myid:2] - INFO 
[NIOWorkerThread-1:StatCommand@49] - Stat command output 2016-09-13 05:09:26,298 [myid:2] - INFO [NIOWorkerThread-1:NIOServerCnxn@607] - Closed socket connection for client /127.0.0.1:45002 (no session established for client) 2016-09-13 05:09:26,304 [myid:] - INFO [main:Environment@109] - Client environment:zookeeper.version=3.6.0-SNAPSHOT--1, built on 09/13/2016 05:08 GMT 2016-09-13 05:09:26,304 [myid:] - INFO [main:Environment@109] - Client environment:host.name=jenkins-test-5f3 2016-09-13 05:09:26,304 [myid:] - INFO [main:Environment@109] - Client environment:java.version=1.7.0_101 2016-09-13 05:09:26,305 [myid:] - INFO [main:Environment@109] - Client environment:java.vendor=Oracle Corporation 2016-09-13 05:09:26,305 [myid:] - INFO [main:Environment@109] - Client environment:java.home=/usr/lib/jvm/java-7-openjdk-amd64/jre 2016-09-13 05:09:26,305 [myid:] - INFO [main:Environment@109] - Client environment:java.class.path=/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test/classes:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test/lib/antlr-2.7.7.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test/lib/antlr4-runtime-4.5.1-1.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test/lib/checkstyle-6.13.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test/lib/commons-beanutils-1.9.2.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test/lib/commons-cli-1.3.1.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test/lib/commons-collections-3.2.2.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test/lib/commons-lang3-3.4.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test/lib/commons-logging-1.1.1.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test/lib/guava-18.0.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test/
lib/hamcrest-core-1.3.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test/lib/junit-4.12.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test/lib/mockito-all-1.8.2.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/classes:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/src/java/lib/ivy-2.4.0.jar:/home/jenkins/tools/ant/latest/lib/ant.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/lib/commons-cli-1.2.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/lib/jackson-core-asl-1.9.11.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/lib/jackson-mapper-asl-1.9.11.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/lib/javacc.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/lib/javax.servlet-api-3.1.0.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/lib/jetty-http-9.2.18.v20160721.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/lib/jetty-io-9.2.18.v20160721.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/lib/jetty-security-9.2.18.v20160721.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/lib/jetty-server-9.2.18.v20160721.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/lib/jetty-servlet-9.2.18.v20160721.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/lib/jetty-util-9.2.18.v20160721.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/lib/jline-2.11.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/lib/log4j-1.2.17.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/lib/netty-3.10.5.Final. ...[truncated 31909 chars]... 
3] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@915] - Connection broken for id 1, my id = 3, error = java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900) 2016-09-13 05:09:26,894 [myid:3] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker 2016-09-13 05:09:26,894 [myid:3] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@832] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:982) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:63) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:820) 2016-09-13 05:09:26,894 [myid:3] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 1 my id = 3 2016-09-13 05:09:26,895 [myid:1] - WARN [RecvWorker:3:QuorumCnxManager$RecvWorker@915] - Connection broken for id 3, my id = 1, error = java.net.SocketException: Socket closed at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.read(SocketInputStream.java:152) at java.net.SocketInputStream.read(SocketInputStream.java:122) at java.net.SocketInputStream.read(SocketInputStream.java:210) at java.io.DataInputStream.readInt(DataInputStream.java:387) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900) 2016-09-13 05:09:26,895 [myid:1] - WARN [RecvWorker:3:QuorumCnxManager$RecvWorker@918] - Interrupting 
SendWorker 2016-09-13 05:09:26,895 [myid:1] - WARN [SendWorker:3:QuorumCnxManager$SendWorker@832] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:982) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:63) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:820) 2016-09-13 05:09:26,896 [myid:1] - WARN [SendWorker:3:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 3 my id = 1 2016-09-13 05:09:26,905 [myid:0] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 1 my id = 0 2016-09-13 05:09:26,905 [myid:0] - WARN [RecvWorker:3:QuorumCnxManager$RecvWorker@915] - Connection broken for id 3, my id = 0, error = java.net.SocketException: Socket closed at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.read(SocketInputStream.java:152) at java.net.SocketInputStream.read(SocketInputStream.java:122) at java.net.SocketInputStream.read(SocketInputStream.java:210) at java.io.DataInputStream.readInt(DataInputStream.java:387) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900) 2016-09-13 05:09:26,906 [myid:0] - WARN [RecvWorker:3:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker 2016-09-13 05:09:26,905 [myid:3] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@915] - Connection broken for id 0, my id = 3, error = java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at 
org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900) 2016-09-13 05:09:26,906 [myid:3] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker 2016-09-13 05:09:26,905 [myid:0] - WARN [SendWorker:3:QuorumCnxManager$SendWorker@832] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:982) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:63) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:820) 2016-09-13 05:09:26,913 [myid:3] - WARN [QuorumPeer[myid=3](plain=/0:0:0:0:0:0:0:0:13924)(secure=disabled):Learner@417] - Got zxid 0x100000002 expected 0x1 2016-09-13 05:09:26,917 [myid:3] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@832] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:982) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:63) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:820) 2016-09-13 05:09:26,918 [myid:3] - WARN 
[SendWorker:0:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 0 my id = 3 2016-09-13 05:09:26,917 [myid:0] - WARN [SendWorker:3:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 3 my id = 0 2016-09-13 05:09:26,917 [myid:1] - INFO [QuorumPeerListener:QuorumCnxManager$Listener@632] - My election bind port: /127.0.0.1:13920 2016-09-13 05:09:26,919 [myid:3] - INFO [QuorumPeer[myid=3](plain=/0:0:0:0:0:0:0:0:13924)(secure=disabled):QuorumPeer@1481] - writeToDisk == true but configFilename == null 2016-09-13 05:09:26,920 [myid:3] - WARN [QuorumPeer[myid=3](plain=/0:0:0:0:0:0:0:0:13924)(secure=disabled):QuorumPeer@1417] - Restarting Leader Election 2016-09-13 05:09:26,920 [myid:3] - INFO [/127.0.0.1:13926:QuorumCnxManager$Listener@661] - Leaving listener 2016-09-13 05:09:26,922 [myid:0] - INFO [QuorumPeerListener:QuorumCnxManager$Listener@632] - My election bind port: /127.0.0.1:13917 2016-09-13 05:09:26,936 [myid:3] - INFO [QuorumPeer[myid=3](plain=/0:0:0:0:0:0:0:0:13924)(secure=disabled):Learner@493] - Learner received UPTODATE message 2016-09-13 05:09:26,936 [myid:3] - INFO [QuorumPeerListener:QuorumCnxManager$Listener@632] - My election bind port: /127.0.0.1:13926 2016-09-13 05:09:27,036 [myid:3] - INFO [QuorumPeer[myid=3](plain=/0:0:0:0:0:0:0:0:13924)(secure=disabled):CommitProcessor@318] - Configuring CommitProcessor with 4 worker threads. 2016-09-13 05:09:27,061 [myid:3] - INFO [SyncThread:3:FileTxnLog@204] - Creating new log file: log.100000002 2016-09-13 05:09:28,635 [myid:127.0.0.1:13924] - INFO [main-SendThread(127.0.0.1:13924):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:13924. 
Will not attempt to authenticate using SASL (unknown error) 2016-09-13 05:09:28,635 [myid:127.0.0.1:13924] - INFO [main-SendThread(127.0.0.1:13924):ClientCnxn$SendThread@948] - Socket connection established, initiating session, client: /127.0.0.1:50397, server: 127.0.0.1/127.0.0.1:13924 2016-09-13 05:09:28,636 [myid:3] - INFO [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:13924:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:50397 2016-09-13 05:09:28,651 [myid:3] - INFO [NIOWorkerThread-2:ZooKeeperServer@995] - Client attempting to establish new session at /127.0.0.1:50397 2016-09-13 05:09:28,652 [myid:3] - WARN [QuorumPeer[myid=3](plain=/0:0:0:0:0:0:0:0:13924)(secure=disabled):Follower@122] - Got zxid 0x100000003 expected 0x1 2016-09-13 05:09:28,654 [myid:3] - INFO [CommitProcWorkThread-1:ZooKeeperServer@709] - Established session 0x3000048422c0000 with negotiated timeout 30000 for client /127.0.0.1:50397 2016-09-13 05:09:28,661 [myid:127.0.0.1:13924] - INFO [main-SendThread(127.0.0.1:13924):ClientCnxn$SendThread@1381] - Session establishment complete on server 127.0.0.1/127.0.0.1:13924, sessionid = 0x3000048422c0000, negotiated timeout = 30000 2016-09-13 05:09:28,690 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@98] - TEST METHOD FAILED testDuringLeaderSync java.lang.AssertionError: zoo.cfg.dynamic.next is not deleted. 
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.junit.Assert.assertFalse(Assert.java:64)
	at org.apache.zookeeper.server.quorum.ReconfigDuringLeaderSyncTest.testDuringLeaderSync(ReconfigDuringLeaderSyncTest.java:155)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:79)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:53)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:38)
	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:535)
	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1182)
	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:1033)
2016-09-13 05:09:28,692 [myid:] - INFO [main:QuorumBase@394] - Shutting down quorum peer QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled)
2016-09-13 05:09:28,692 [myid:] - INFO [main:Follower@198] - shutdown called
java.lang.Exception: shutdown Follower
	at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:198)
	at org.apache.zookeeper.server.quorum.QuorumPeer.shutdown(QuorumPeer.java:1184)
	at org.apache.zookeeper.test.QuorumBase.shutdown(QuorumBase.java:395)
	at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$TestQPMain.shutdown(QuorumPeerTestBase.java:60)
	at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.shutdown(QuorumPeerTestBase.java:257)
	at org.apache.zookeeper.server.quorum.ReconfigDuringLeaderSyncTest.tearDown(ReconfigDuringLeaderSyncTest.java:189)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
	at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:53)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:38)
	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:535)
	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1182)
	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:1033)
2016-09-13 05:09:28,693 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id0,name1=replica.0,name2=Follower,name3=Connections,name4=127.0.0.1,name5=0x483fd50000]
2016-09-13 05:09:28,694 [myid:] - INFO [main:NIOServerCnxn@607] - Closed socket connection for client /127.0.0.1:38389 which had sessionid 0x483fd50000
2016-09-13 05:09:28,694 [myid:] - INFO [main:LearnerZooKeeperServer@165] - Shutting down
2016-09-13 05:09:28,694 [myid:] - INFO [main:ZooKeeperServer@529] - shutting down
2016-09-13 05:09:28,694 [myid:] - INFO [main:FollowerRequestProcessor@138] - Shutting down
2016-09-13 05:09:28,695 [myid:127.0.0.1:13915] - INFO [main-SendThread(127.0.0.1:13915):ClientCnxn$SendThread@1231] - Unable to read additional data from server sessionid 0x483fd50000, likely server has closed socket, closing socket connection and attempting reconnect
2016-09-13 05:09:28,695 [myid:] - INFO [main:CommitProcessor@414] - Shutting down
2016-09-13 05:09:28,695 [myid:0] - INFO [FollowerRequestProcessor:0:FollowerRequestProcessor@109] - FollowerRequestProcessor exited loop!
2016-09-13 05:09:28,696 [myid:0] - INFO [CommitProcessor:0:CommitProcessor@299] - CommitProcessor exited loop!
2016-09-13 05:09:28,696 [myid:] - INFO [main:FinalRequestProcessor@479] - shutdown of request processor complete
2016-09-13 05:09:28,697 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id0,name1=replica.0,name2=Follower,name3=InMemoryDataTree]
2016-09-13 05:09:28,697 [myid:] - INFO [main:SyncRequestProcessor@191] - Shutting down
2016-09-13 05:09:28,697 [myid:0] - INFO [SyncThread:0:SyncRequestProcessor@169] - SyncRequestProcessor exited!
2016-09-13 05:09:28,698 [myid:0] - INFO [ConnnectionExpirer:NIOServerCnxnFactory$ConnectionExpirerThread@583] - ConnnectionExpirerThread interrupted
2016-09-13 05:09:28,698 [myid:0] - INFO [NIOServerCxnFactory.AcceptThread:/127.0.0.1:13915:NIOServerCnxnFactory$AcceptThread@219] - accept thread exitted run method
2016-09-13 05:09:28,699 [myid:0] - INFO [NIOServerCxnFactory.SelectorThread-0:NIOServerCnxnFactory$SelectorThread@420] - selector thread exitted run method
2016-09-13 05:09:28,699 [myid:0] - INFO [/127.0.0.1:13917:QuorumCnxManager$Listener@661] - Leaving listener
2016-09-13 05:09:28,700 [myid:] - INFO [main:QuorumBase@398] - Shutting down leader election QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled)
2016-09-13 05:09:28,700 [myid:] - INFO [main:QuorumBase@403] - Waiting for QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled) to exit thread
2016-09-13 05:09:29,738 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection$Messenger$WorkerReceiver@440] - WorkerReceiver is down
2016-09-13 05:09:29,738 [myid:0] - INFO [WorkerSender[myid=0]:FastLeaderElection$Messenger$WorkerSender@470] - WorkerSender is down
2016-09-13 05:09:29,776 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection$Messenger$WorkerReceiver@440] - WorkerReceiver is down
2016-09-13 05:09:29,785 [myid:3] - INFO [WorkerSender[myid=3]:FastLeaderElection$Messenger$WorkerSender@470] - WorkerSender is down
2016-09-13 05:09:29,785 [myid:1] - INFO [WorkerSender[myid=1]:FastLeaderElection$Messenger$WorkerSender@470] - WorkerSender is down
2016-09-13 05:09:29,789 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection$Messenger$WorkerReceiver@440] - WorkerReceiver is down
2016-09-13 05:09:29,790 [myid:2] - INFO [WorkerSender[myid=2]:FastLeaderElection$Messenger$WorkerSender@470] - WorkerSender is down
2016-09-13 05:09:29,792 [myid:3] - INFO [WorkerReceiver[myid=3]:FastLeaderElection$Messenger$WorkerReceiver@440] - WorkerReceiver is down
2016-09-13 05:09:29,922 [myid:0] - INFO [WorkerSender[myid=0]:FastLeaderElection$Messenger$WorkerSender@470] - WorkerSender is down
2016-09-13 05:09:29,923 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection$Messenger$WorkerReceiver@440] - WorkerReceiver is down
2016-09-13 05:09:30,255 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id0,name1=replica.0,name2=Follower]
2016-09-13 05:09:30,256 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled):Follower@198] - shutdown called
java.lang.Exception: shutdown Follower
	at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:198)
	at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:1115)
2016-09-13 05:09:30,256 [myid:0] - WARN [QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled):QuorumPeer@1158] - PeerState set to LOOKING
2016-09-13 05:09:30,256 [myid:0] - WARN [QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled):QuorumPeer@1140] - QuorumPeer main thread exited
2016-09-13 05:09:30,256 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id0]
2016-09-13 05:09:30,256 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id0,name1=replica.0]
2016-09-13 05:09:30,256 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id0,name1=replica.1]
2016-09-13 05:09:30,257 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id0,name1=replica.2]
2016-09-13 05:09:30,257 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:13915)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id0,name1=replica.3]
2016-09-13 05:09:30,258 [myid:] - INFO [main:QuorumBase@394] - Shutting down quorum peer QuorumPeer[myid=1](plain=/127.0.0.1:13918)(secure=disabled)
2016-09-13 05:09:30,258 [myid:] - INFO [main:Follower@198] - shutdown called
java.lang.Exception: shutdown Follower
	at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:198)
	at org.apache.zookeeper.server.quorum.QuorumPeer.shutdown(QuorumPeer.java:1184)
	at org.apache.zookeeper.test.QuorumBase.shutdown(QuorumBase.java:395)
	at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$TestQPMain.shutdown(QuorumPeerTestBase.java:60)
	at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.shutdown(QuorumPeerTestBase.java:257)
	at org.apache.zookeeper.server.quorum.ReconfigDuringLeaderSyncTest.tearDown(ReconfigDuringLeaderSyncTest.java:189)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
	at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:53)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:38)
	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:535)
	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1182)
	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:1033)
2016-09-13 05:09:30,259 [myid:] - INFO [main:LearnerZooKeeperServer@165] - Shutting down
2016-09-13 05:09:30,259 [myid:] - INFO [main:ZooKeeperServer@529] - shutting down
2016-09-13 05:09:30,259 [myid:] - INFO [main:FollowerRequestProcessor@138] - Shutting down
2016-09-13 05:09:30,259 [myid:] - INFO [main:CommitProcessor@414] - Shutting down
2016-09-13 05:09:30,259 [myid:1] - INFO [FollowerRequestProcessor:1:FollowerRequestProcessor@109] - FollowerRequestProcessor exited loop!
2016-09-13 05:09:30,259 [myid:] - INFO [main:FinalRequestProcessor@479] - shutdown of request processor complete
2016-09-13 05:09:30,259 [myid:1] - INFO [CommitProcessor:1:CommitProcessor@299] - CommitProcessor exited loop!
2016-09-13 05:09:30,260 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id1,name1=replica.1,name2=Follower,name3=InMemoryDataTree]
2016-09-13 05:09:30,260 [myid:] - INFO [main:SyncRequestProcessor@191] - Shutting down
2016-09-13 05:09:30,260 [myid:1] - INFO [SyncThread:1:SyncRequestProcessor@169] - SyncRequestProcessor exited!
2016-09-13 05:09:30,261 [myid:1] - INFO [ConnnectionExpirer:NIOServerCnxnFactory$ConnectionExpirerThread@583] - ConnnectionExpirerThread interrupted
2016-09-13 05:09:30,261 [myid:1] - INFO [NIOServerCxnFactory.AcceptThread:/127.0.0.1:13918:NIOServerCnxnFactory$AcceptThread@219] - accept thread exitted run method
2016-09-13 05:09:30,261 [myid:1] - INFO [NIOServerCxnFactory.SelectorThread-0:NIOServerCnxnFactory$SelectorThread@420] - selector thread exitted run method
2016-09-13 05:09:30,261 [myid:1] - INFO [/127.0.0.1:13920:QuorumCnxManager$Listener@661] - Leaving listener
2016-09-13 05:09:30,261 [myid:] - INFO [main:QuorumBase@398] - Shutting down leader election QuorumPeer[myid=1](plain=/127.0.0.1:13918)(secure=disabled)
2016-09-13 05:09:30,262 [myid:] - INFO [main:QuorumBase@403] - Waiting for QuorumPeer[myid=1](plain=/127.0.0.1:13918)(secure=disabled) to exit thread
2016-09-13 05:09:30,624 [myid:127.0.0.1:13915] - INFO [main-SendThread(127.0.0.1:13915):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:13915. Will not attempt to authenticate using SASL (unknown error)
2016-09-13 05:09:30,624 [myid:127.0.0.1:13915] - WARN [main-SendThread(127.0.0.1:13915):ClientCnxn$SendThread@1235] - Session 0x483fd50000 for server 127.0.0.1/127.0.0.1:13915, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744)
	at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357)
	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214)
2016-09-13 05:09:31,733 [myid:127.0.0.1:13915] - INFO [main-SendThread(127.0.0.1:13915):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:13915. Will not attempt to authenticate using SASL (unknown error)
2016-09-13 05:09:31,734 [myid:127.0.0.1:13915] - WARN [main-SendThread(127.0.0.1:13915):ClientCnxn$SendThread@1235] - Session 0x483fd50000 for server 127.0.0.1/127.0.0.1:13915, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744)
	at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:357)
	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214)
2016-09-13 05:09:32,255 [myid:1] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:13918)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id1,name1=replica.1,name2=Follower]
2016-09-13 05:09:32,256 [myid:1] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:13918)(secure=disabled):Follower@198] - shutdown called
java.lang.Exception: shutdown Follower
	at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:198)
	at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:1115)
2016-09-13 05:09:32,256 [myid:1] - WARN [QuorumPeer[myid=1](plain=/127.0.0.1:13918)(secure=disabled):QuorumPeer@1158] - PeerState set to LOOKING
2016-09-13 05:09:32,256 [myid:1] - WARN [QuorumPeer[myid=1](plain=/127.0.0.1:13918)(secure=disabled):QuorumPeer@1140] - QuorumPeer main thread exited
2016-09-13 05:09:32,256 [myid:1] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:13918)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id1]
2016-09-13 05:09:32,257 [myid:1] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:13918)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id1,name1=replica.1]
2016-09-13 05:09:32,257 [myid:1] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:13918)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id1,name1=replica.0]
2016-09-13 05:09:32,257 [myid:1] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:13918)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id1,name1=replica.2]
2016-09-13 05:09:32,257 [myid:1] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:13918)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id1,name1=replica.3]
2016-09-13 05:09:32,257 [myid:] - INFO [main:QuorumBase@394] - Shutting down quorum peer QuorumPeer[myid=2](plain=/127.0.0.1:13921)(secure=disabled)
2016-09-13 05:09:32,258 [myid:] - INFO [main:Leader@623] - Shutting down
2016-09-13 05:09:32,258 [myid:] - INFO [main:Leader@629] - Shutdown called
java.lang.Exception: shutdown Leader! reason: quorum Peer shutdown
	at org.apache.zookeeper.server.quorum.Leader.shutdown(Leader.java:629)
	at org.apache.zookeeper.server.quorum.QuorumPeer.shutdown(QuorumPeer.java:1181)
	at org.apache.zookeeper.test.QuorumBase.shutdown(QuorumBase.java:395)
	at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$TestQPMain.shutdown(QuorumPeerTestBase.java:60)
	at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.shutdown(QuorumPeerTestBase.java:257)
	at org.apache.zookeeper.server.quorum.ReconfigDuringLeaderSyncTest.tearDown(ReconfigDuringLeaderSyncTest.java:189)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
	at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:53)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:38)
	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:535)
	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1182)
	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:1033)
2016-09-13 05:09:32,259 [myid:] - INFO [main:ZooKeeperServer@529] - shutting down
2016-09-13 05:09:32,259 [myid:] - INFO [main:SessionTrackerImpl@232] - Shutting down
2016-09-13 05:09:32,259 [myid:] - INFO [main:LeaderRequestProcessor@77] - Shutting down
2016-09-13 05:09:32,259 [myid:] - INFO [main:PrepRequestProcessor@965] - Shutting down
2016-09-13 05:09:32,259 [myid:] - INFO [main:ProposalRequestProcessor@88] - Shutting down
2016-09-13 05:09:32,259 [myid:2] - INFO [ProcessThread(sid:2 cport:-1)::PrepRequestProcessor@154] - PrepRequestProcessor exited loop!
2016-09-13 05:09:32,259 [myid:] - INFO [main:CommitProcessor@414] - Shutting down
2016-09-13 05:09:32,260 [myid:2] - INFO [CommitProcessor:2:CommitProcessor@299] - CommitProcessor exited loop!
2016-09-13 05:09:32,260 [myid:] - INFO [main:Leader$ToBeAppliedRequestProcessor@924] - Shutting down
2016-09-13 05:09:32,260 [myid:] - INFO [main:FinalRequestProcessor@479] - shutdown of request processor complete
2016-09-13 05:09:32,261 [myid:] - INFO [main:SyncRequestProcessor@191] - Shutting down
2016-09-13 05:09:32,262 [myid:2] - INFO [SyncThread:2:SyncRequestProcessor@169] - SyncRequestProcessor exited!
2016-09-13 05:09:32,262 [myid:2] - INFO [LearnerCnxAcceptor-/127.0.0.1:13922:Leader$LearnerCnxAcceptor@373] - exception while shutting down acceptor: java.net.SocketException: Socket closed
2016-09-13 05:09:32,263 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id2,name1=replica.2,name2=Leader,name3=InMemoryDataTree]
2016-09-13 05:09:32,264 [myid:2] - WARN [LearnerHandler-/127.0.0.1:39078:LearnerHandler@619] - ******* GOODBYE /127.0.0.1:39078 ********
2016-09-13 05:09:32,265 [myid:3] - WARN [QuorumPeer[myid=3](plain=/0:0:0:0:0:0:0:0:13924)(secure=disabled):Follower@93] - Exception when following the leader
java.io.EOFException
	at java.io.DataInputStream.readInt(DataInputStream.java:392)
	at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63)
	at org.apache.zookeeper.server.quorum.QuorumPacket.deserialize(QuorumPacket.java:83)
	at org.apache.jute.BinaryInputArchive.readRecord(BinaryInputArchive.java:99)
	at org.apache.zookeeper.server.quorum.Learner.readPacket(Learner.java:155)
	at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:89)
	at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:1111)
2016-09-13 05:09:32,265 [myid:2] - WARN [LearnerHandler-/127.0.0.1:39079:LearnerHandler@619] - ******* GOODBYE /127.0.0.1:39079 ********
2016-09-13 05:09:32,265 [myid:2] - WARN [LearnerHandler-/127.0.0.1:39104:LearnerHandler@619] - ******* GOODBYE /127.0.0.1:39104 ********
2016-09-13 05:09:32,265 [myid:3] - INFO [QuorumPeer[myid=3](plain=/0:0:0:0:0:0:0:0:13924)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id3,name1=replica.3,name2=Follower]
2016-09-13 05:09:32,265 [myid:2] - WARN [LearnerHandler-/127.0.0.1:39104:LearnerHandler@903] - Ignoring unexpected exception
java.lang.InterruptedException
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1219)
	at java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:340)
	at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:338)
	at org.apache.zookeeper.server.quorum.LearnerHandler.shutdown(LearnerHandler.java:901)
	at org.apache.zookeeper.server.quorum.LearnerHandler.run(LearnerHandler.java:622)
2016-09-13 05:09:32,265 [myid:3] - INFO [QuorumPeer[myid=3](plain=/0:0:0:0:0:0:0:0:13924)(secure=disabled):Follower@198] - shutdown called
java.lang.Exception: shutdown Follower
	at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:198)
	at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:1115)
2016-09-13 05:09:32,266 [myid:2] - INFO [NIOServerCxnFactory.AcceptThread:/127.0.0.1:13921:NIOServerCnxnFactory$AcceptThread@219] - accept thread exitted run method
2016-09-13 05:09:32,266 [myid:3] - INFO [QuorumPeer[myid=3](plain=/0:0:0:0:0:0:0:0:13924)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id3,name1=replica.3,name2=Follower,name3=Connections,name4=127.0.0.1,name5=0x3000048422c0000]
2016-09-13 05:09:32,266 [myid:2] - INFO [NIOServerCxnFactory.SelectorThread-0:NIOServerCnxnFactory$SelectorThread@420] - selector thread exitted run method
2016-09-13 05:09:32,266 [myid:2] - INFO [ConnnectionExpirer:NIOServerCnxnFactory$ConnectionExpirerThread@583] - ConnnectionExpirerThread interrupted
2016-09-13 05:09:32,266 [myid:3] - INFO [QuorumPeer[myid=3](plain=/0:0:0:0:0:0:0:0:13924)(secure=disabled):NIOServerCnxn@607] - Closed socket connection for client /127.0.0.1:50397 which had sessionid 0x3000048422c0000
2016-09-13 05:09:32,267 [myid:3] - INFO [QuorumPeer[myid=3](plain=/0:0:0:0:0:0:0:0:13924)(secure=disabled):LearnerZooKeeperServer@165] - Shutting down
2016-09-13 05:09:32,267 [myid:3] - INFO [QuorumPeer[myid=3](plain=/0:0:0:0:0:0:0:0:13924)(secure=disabled):ZooKeeperServer@529] - shutting down
2016-09-13 05:09:32,267 [myid:3] - INFO [QuorumPeer[myid=3](plain=/0:0:0:0:0:0:0:0:13924)(secure=disabled):FollowerRequestProcessor@138] - Shutting down
2016-09-13 05:09:32,267 [myid:3] - INFO [QuorumPeer[myid=3](plain=/0:0:0:0:0:0:0:0:13924)(secure=disabled):CommitProcessor@414] - Shutting down
2016-09-13 05:09:32,267 [myid:127.0.0.1:13924] - INFO [main-SendThread(127.0.0.1:13924):ClientCnxn$SendThread@1231] - Unable to read additional data from server sessionid 0x3000048422c0000, likely server has closed socket, closing socket connection and attempting reconnect
2016-09-13 05:09:32,268 [myid:3] - INFO [FollowerRequestProcessor:3:FollowerRequestProcessor@109] - FollowerRequestProcessor exited loop!
2016-09-13 05:09:32,268 [myid:3] - INFO [CommitProcessor:3:CommitProcessor@299] - CommitProcessor exited loop!
2016-09-13 05:09:32,268 [myid:3] - INFO [QuorumPeer[myid=3](plain=/0:0:0:0:0:0:0:0:13924)(secure=disabled):FinalRequestProcessor@479] - shutdown of request processor complete
2016-09-13 05:09:32,269 [myid:3] - INFO [QuorumPeer[myid=3](plain=/0:0:0:0:0:0:0:0:13924)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id3,name1=replica.3,name2=Follower,name3=InMemoryDataTree]
2016-09-13 05:09:32,269 [myid:3] - INFO [QuorumPeer[myid=3](plain=/0:0:0:0:0:0:0:0:13924)(secure=disabled):SyncRequestProcessor@191] - Shutting down
2016-09-13 05:09:32,269 [myid:3] - INFO [SyncThread:3:SyncRequestProcessor@169] - SyncRequestProcessor exited!
2016-09-13 05:09:32,269 [myid:3] - WARN [QuorumPeer[myid=3](plain=/0:0:0:0:0:0:0:0:13924)(secure=disabled):QuorumPeer@1158] - PeerState set to LOOKING
2016-09-13 05:09:32,269 [myid:3] - INFO [QuorumPeer[myid=3](plain=/0:0:0:0:0:0:0:0:13924)(secure=disabled):QuorumPeer@1033] - LOOKING
2016-09-13 05:09:32,270 [myid:3] - INFO [QuorumPeer[myid=3](plain=/0:0:0:0:0:0:0:0:13924)(secure=disabled):FileSnap@83] - Reading snapshot /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/build/test/tmp/test1465497725988500626.junit.dir/data/version-2/snapshot.100000001
2016-09-13 05:09:32,271 [myid:2] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:13921)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id2,name1=replica.2,name2=Leader]
2016-09-13 05:09:32,272 [myid:2] - WARN [QuorumPeer[myid=2](plain=/127.0.0.1:13921)(secure=disabled):QuorumPeer@1127] - Unexpected exception
java.lang.InterruptedException
	at java.lang.Object.wait(Native Method)
	at org.apache.zookeeper.server.quorum.Leader.lead(Leader.java:561)
	at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:1124)
2016-09-13 05:09:32,272 [myid:2] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:13921)(secure=disabled):Leader@623] - Shutting down
2016-09-13 05:09:32,272 [myid:2] - WARN [QuorumPeer[myid=2](plain=/127.0.0.1:13921)(secure=disabled):QuorumPeer@1158] - PeerState set to LOOKING
2016-09-13 05:09:32,272 [myid:2] - WARN [QuorumPeer[myid=2](plain=/127.0.0.1:13921)(secure=disabled):QuorumPeer@1140] - QuorumPeer main thread exited
2016-09-13 05:09:32,272 [myid:2] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:13921)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id2]
2016-09-13 05:09:32,272 [myid:2] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:13921)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id2,name1=replica.2]
2016-09-13 05:09:32,272 [myid:2] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:13921)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id2,name1=replica.0]
2016-09-13 05:09:32,273 [myid:2] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:13921)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id2,name1=replica.1]
2016-09-13 05:09:32,273 [myid:2] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:13921)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id2,name1=replica.3]
2016-09-13 05:09:32,274 [myid:2] - INFO [/127.0.0.1:13923:QuorumCnxManager$Listener@661] - Leaving listener
2016-09-13 05:09:32,274 [myid:] - INFO [main:QuorumBase@398] - Shutting down leader election QuorumPeer[myid=2](plain=/127.0.0.1:13921)(secure=disabled)
2016-09-13 05:09:32,274 [myid:] - INFO [main:QuorumBase@403] - Waiting for QuorumPeer[myid=2](plain=/127.0.0.1:13921)(secure=disabled) to exit thread
2016-09-13 05:09:32,274 [myid:] - INFO [main:QuorumBase@394] - Shutting down quorum peer QuorumPeer[myid=3](plain=/0:0:0:0:0:0:0:0:13924)(secure=disabled)
2016-09-13 05:09:32,275 [myid:3] - INFO [ConnnectionExpirer:NIOServerCnxnFactory$ConnectionExpirerThread@583] - ConnnectionExpirerThread interrupted
2016-09-13 05:09:32,275 [myid:3] - INFO [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:13924:NIOServerCnxnFactory$AcceptThread@219] - accept thread exitted run method
2016-09-13 05:09:32,276 [myid:3] - INFO [NIOServerCxnFactory.SelectorThread-0:NIOServerCnxnFactory$SelectorThread@420] - selector thread exitted run method
2016-09-13 05:09:32,276 [myid:3] - INFO [QuorumPeer[myid=3](plain=/0:0:0:0:0:0:0:0:13924)(secure=disabled):FastLeaderElection@894] - New election.
My id = 3, proposed zxid=0x100000004 2016-09-13 05:09:32,277 [myid:3] - WARN [WorkerSender[myid=3]:QuorumCnxManager@455] - Cannot open channel to 0 at election address /127.0.0.1:13917 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465) at java.lang.Thread.run(Thread.java:745) 2016-09-13 05:09:32,277 [myid:3] - INFO [/127.0.0.1:13926:QuorumCnxManager$Listener@661] - Leaving listener 2016-09-13 05:09:32,277 [myid:3] - WARN [WorkerSender[myid=3]:QuorumCnxManager@455] - Cannot open channel to 0 at election address /127.0.0.1:13917 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441) at 
org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:489) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465) at java.lang.Thread.run(Thread.java:745) 2016-09-13 05:09:32,278 [myid:3] - WARN [WorkerSender[myid=3]:QuorumCnxManager@455] - Cannot open channel to 1 at election address /127.0.0.1:13920 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465) at java.lang.Thread.run(Thread.java:745) 2016-09-13 05:09:32,278 [myid:] - INFO [main:QuorumBase@398] - Shutting down leader election QuorumPeer[myid=3](plain=/0:0:0:0:0:0:0:0:13924)(secure=disabled) 2016-09-13 05:09:32,279 [myid:] - INFO [main:QuorumBase@403] - Waiting for QuorumPeer[myid=3](plain=/0:0:0:0:0:0:0:0:13924)(secure=disabled) to exit thread 2016-09-13 05:09:32,279 [myid:3] - WARN [WorkerSender[myid=3]:QuorumCnxManager@455] - 
Cannot open channel to 1 at election address /127.0.0.1:13920 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:489) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465) at java.lang.Thread.run(Thread.java:745) 2016-09-13 05:09:32,280 [myid:3] - INFO [WorkerSender[myid=3]:FastLeaderElection$Messenger$WorkerSender@470] - WorkerSender is down 2016-09-13 05:09:32,280 [myid:3] - INFO [QuorumPeer[myid=3](plain=/0:0:0:0:0:0:0:0:13924)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id3,name1=replica.3,name2=LeaderElection] 2016-09-13 05:09:32,280 [myid:3] - WARN [QuorumPeer[myid=3](plain=/0:0:0:0:0:0:0:0:13924)(secure=disabled):QuorumPeer@1140] - QuorumPeer main thread exited 2016-09-13 05:09:32,280 [myid:3] - INFO [QuorumPeer[myid=3](plain=/0:0:0:0:0:0:0:0:13924)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id3] 2016-09-13 05:09:32,280 [myid:3] - INFO [QuorumPeer[myid=3](plain=/0:0:0:0:0:0:0:0:13924)(secure=disabled):MBeanRegistry@128] - Unregister MBean 
[org.apache.ZooKeeperService:name0=ReplicatedServer_id3,name1=replica.3] 2016-09-13 05:09:32,281 [myid:3] - INFO [QuorumPeer[myid=3](plain=/0:0:0:0:0:0:0:0:13924)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id3,name1=replica.0] 2016-09-13 05:09:32,281 [myid:3] - INFO [QuorumPeer[myid=3](plain=/0:0:0:0:0:0:0:0:13924)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id3,name1=replica.1] 2016-09-13 05:09:32,281 [myid:3] - INFO [QuorumPeer[myid=3](plain=/0:0:0:0:0:0:0:0:13924)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id3,name1=replica.2] 2016-09-13 05:09:32,283 [myid:] - INFO [main:ZKTestCase$1@70] - FAILED testDuringLeaderSync java.lang.AssertionError: zoo.cfg.dynamic.next is not deleted. at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.assertTrue(Assert.java:41) at org.junit.Assert.assertFalse(Assert.java:64) at org.apache.zookeeper.server.quorum.ReconfigDuringLeaderSyncTest.testDuringLeaderSync(ReconfigDuringLeaderSyncTest.java:155) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:79) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at 
org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:53) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:38) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:535) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1182) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:1033) 2016-09-13 05:09:32,284 [myid:] - INFO [main:ZKTestCase$1@60] - FINISHED testDuringLeaderSync {noformat} |
flaky, flaky-test | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 2 years, 34 weeks ago | 0|i33l1r: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2576 | After svn to git migration ZooKeeper Precommit jenkins job is failing. |
Bug | Resolved | Blocker | Fixed | Patrick D. Hunt | Patrick D. Hunt | Patrick D. Hunt | 12/Sep/16 12:46 | 16/Oct/16 10:58 | 12/Sep/16 23:23 | 3.6.0 | build | 0 | 3 | After moving from svn to git the precommit job is failing. I've disabled it temporarily. https://builds.apache.org/view/S-Z/view/ZooKeeper/job/PreCommit-ZOOKEEPER-Build/ |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 3 years, 27 weeks, 2 days ago | 0|i33j1z: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2575 | /./// does not have the form scheme:id:perm and client is quit. |
Bug | Open | Minor | Unresolved | kevin.chen | Prabhunath Yadav | Prabhunath Yadav | 12/Sep/16 01:34 | 23/Nov/16 04:41 | 3.3.3 | java client | 0 | 4 | While creating a node using a command with bad arguments such as create / /./// (or some other wrong format), the CLI shows the message /./// does not have the form scheme:id:perm together with Exception in thread "main" org.apache.zookeeper.KeeperException$InvalidACLException: KeeperErrorCode=InvalidACL ..... It should give an accurate message, but the client should not get closed or quit. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 17 weeks, 1 day ago | 0|i33i6n: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2574 | PurgeTxnLog can inadvertently delete required txn log files |
Bug | Closed | Major | Fixed | Abhishek Rai | Abhishek Rai | Abhishek Rai | 09/Sep/16 16:20 | 15/Nov/17 16:29 | 22/Jan/17 20:31 | 3.4.7, 3.4.8, 3.5.0, 3.5.1, 3.5.2 | 3.4.10, 3.5.3, 3.6.0 | server | 0 | 9 | ZOOKEEPER-1797, ZOOKEEPER-2420, ZOOKEEPER-2671 | Zookeeper 3.4.8, standalone, and 3-server quorum | As part of the fix for ZOOKEEPER-1797, the call to FileTxnSnapLog.getSnapshotLogs() was removed from PurgeTxnLog.java. As a result, some old-looking but required txn log files can be deleted, resulting in data corruption or loss. For example, consider the following: 1. Configuration: autopurge.snapRetainCount=3 2. Following files exist: log.100 spans transactions from zxid=100 till zxid=140 (inclusive) snapshot.110 - snapshot as of zxid=110 snapshot.120 - snapshot as of zxid=120 snapshot.130 - snapshot as of zxid=130 Above scenario is possible when snapshotting has happened multiple times but without accompanying log rollover, which is possible if the server was running as a learner. 3. PurgeTxnLog retains all snapshots but deletes log.100 because its zxid is older than the zxid of the oldest snapshot (110). This results in loss of transactions in the range 131-140. Before the fix for ZOOKEEPER-1797, this was avoided by the call to FileTxnSnapLog.getSnapshotLogs() which finds and retains the newest txn log file with starting zxid < oldest retained snapshot's highest zxid. |
9223372036854775807 | No Perforce job exists for this issue. | 6 | 9223372036854775807 | 2 years, 18 weeks, 1 day ago | 0|i33glz: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
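The retention rule the report above describes can be sketched as a stand-alone helper (the class and method names here are hypothetical, not the actual PurgeTxnLog code): the newest txn log whose starting zxid is at or below the oldest retained snapshot's zxid must be kept, because it may span transactions past that snapshot.

```java
import java.util.Arrays;
import java.util.List;

public class RetainSketch {
    // Hypothetical helper, not the actual PurgeTxnLog code: returns the
    // start zxid of the newest txn log that must be kept. A log whose
    // start zxid is at or below the oldest retained snapshot's zxid may
    // still span transactions PAST that snapshot, so deleting it (as the
    // regression in this report does) can lose committed transactions.
    static long newestLogToRetain(List<Long> logStartZxids, long oldestSnapZxid) {
        long keep = -1;
        for (long start : logStartZxids) {
            if (start <= oldestSnapZxid && start > keep) {
                keep = start;
            }
        }
        return keep;
    }

    public static void main(String[] args) {
        // Scenario from the report: log.100 spans zxids 100..140 and the
        // oldest retained snapshot is snapshot.110, so log.100 must stay.
        System.out.println(newestLogToRetain(Arrays.asList(100L), 110L)); // 100
    }
}
```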
| ZooKeeper | ZOOKEEPER-2573 | Modify Info.REVISION to adapt git repo |
Bug | Closed | Major | Fixed | Edward Ribeiro | Mohammad Arshad | Mohammad Arshad | 09/Sep/16 09:35 | 11/Apr/17 18:45 | 25/Jan/17 07:52 | 3.4.9, 3.5.2 | 3.4.10, 3.5.3, 3.6.0 | build, server | 0 | 7 | Modify {{org.apache.zookeeper.version.Info.REVISION}} to store the git repo revision. Currently {{org.apache.zookeeper.version.Info.REVISION}} stores the svn repo revision, which is of type int. But after migrating to the git repo, the git revision (commit 63f5132716c08b3d8f18993bf98eb46eb42f80fb) cannot be stored in this variable. So we should either change this variable to a String or introduce a new variable to store the git revision and leave the svn revision variable unchanged. build.xml and org.apache.zookeeper.version.util.VerGen also need to be modified. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 2 years, 49 weeks, 2 days ago | After the migration of ZooKeeper's version control system from 'svn repo' to 'apache git repo' the revision info becomes git's SHA-1 hash value. | 0|i33fyn: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2572 | Potential resource leak in FileTxnLog.truncate |
Bug | Open | Major | Unresolved | maoling | Michael Han | Michael Han | 09/Sep/16 01:50 | 19/Feb/19 06:46 | 3.4.9, 3.5.2, 3.4.11 | server | 0 | 5 | In FileTxnLog.truncate, we have: {code} public boolean truncate(long zxid) throws IOException { FileTxnIterator itr = null; try { itr = new FileTxnIterator(this.logDir, zxid); PositionInputStream input = itr.inputStream; if(input == null) { throw new IOException("No log files found to truncate! This could " + "happen if you still have snapshots from an old setup or " + "log files were deleted accidentally or dataLogDir was changed in zoo.cfg."); } long pos = input.getPosition(); // now, truncate at the current position RandomAccessFile raf=new RandomAccessFile(itr.logFile,"rw"); raf.setLength(pos); raf.close(); while(itr.goToNextLog()) { if (!itr.logFile.delete()) { LOG.warn("Unable to truncate {}", itr.logFile); } } } finally { close(itr); } return true; } {code} {{raf}} here can potentially be left unclosed after leaving the method, if an (IO) exception is thrown from setLength. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 2 years, 28 weeks ago | 0|i33fdj: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
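A minimal stand-alone sketch of the fix shape for the leak described above: with try-with-resources the file handle is closed even when setLength throws, which is exactly the leak path in the report. truncateAt is a hypothetical helper, not the actual ZooKeeper method.

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.RandomAccessFile;

public class TruncateSketch {
    // Hypothetical helper illustrating the fix shape: the
    // RandomAccessFile is an AutoCloseable, so try-with-resources
    // closes it whether setLength succeeds or throws.
    static void truncateAt(File logFile, long pos) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(logFile, "rw")) {
            raf.setLength(pos);
        }
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("log", ".bin");
        try (FileOutputStream out = new FileOutputStream(f)) {
            out.write(new byte[100]);
        }
        truncateAt(f, 40);
        System.out.println(f.length()); // 40
        f.delete();
    }
}
```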
| ZooKeeper | ZOOKEEPER-2571 | Potential resource leak in QuorumPeer.writeLongToFile |
Bug | Open | Major | Unresolved | Unassigned | Michael Han | Michael Han | 09/Sep/16 01:33 | 22/Jun/18 00:49 | 3.4.9, 3.4.11 | server | 0 | 1 | In QuorumPeer.writeLongToFile we have: {code} try { bw.write(Long.toString(value)); bw.flush(); out.flush(); } catch (IOException e) { LOG.error("Failed to write new file " + file, e); // worst case here the tmp file/resources(fd) are not cleaned up // and the caller will be notified (IOException) aborted = true; out.abort(); throw e; } finally { if (!aborted) { // if the close operation (rename) fails we'll get notified. // worst case the tmp file may still exist out.close(); } } {code} So if any unchecked exception is thrown during the write (e.g. out of memory, you never know), the output stream will not be closed. The fix can be made by setting the flag at the end of the try block instead of in the catch block, which only catches a specific type of exception (this is what ZOOKEEPER-1835 did, thus the same issue does not exist in the 3.5.x branch). |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 27 weeks, 6 days ago | 0|i33fbr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
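A stand-alone sketch of the success-flag pattern the report proposes (the class and method names here are hypothetical stand-ins, not the ZooKeeper ones): mark success as the last statement of the try block so that any throwable, not just IOException, routes cleanup through abort().

```java
public class WriteFlagSketch {
    // Hypothetical stand-in for the output stream being written: it
    // records whether the write was committed (close) or rolled back (abort).
    static class FakeAtomicOutput {
        String state = "open";
        void close() { state = "committed"; }
        void abort() { state = "aborted"; }
    }

    // The pattern the report suggests: set the success flag as the LAST
    // statement of the try block, so that ANY throwable (checked or
    // unchecked) leaves it false and the finally block calls abort()
    // instead of close().
    static String write(FakeAtomicOutput out, Runnable body) {
        boolean success = false;
        try {
            body.run();       // stands in for bw.write(...) / flush()
            success = true;   // reached only if every write succeeded
        } finally {
            if (success) {
                out.close();
            } else {
                out.abort();
            }
        }
        return out.state;
    }

    public static void main(String[] args) {
        FakeAtomicOutput ok = new FakeAtomicOutput();
        System.out.println(write(ok, () -> {}));            // committed
        FakeAtomicOutput bad = new FakeAtomicOutput();
        try {
            write(bad, () -> { throw new RuntimeException("boom"); });
        } catch (RuntimeException expected) {
            // the exception still propagates to the caller
        }
        System.out.println(bad.state);                      // aborted
    }
}
```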
| ZooKeeper | ZOOKEEPER-2570 | ZooKeeper clients are timed out when ZooKeeper servers are very busy |
Bug | Open | Critical | Unresolved | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 08/Sep/16 12:27 | 13/Oct/16 14:59 | 0 | 3 | ZooKeeper clients are timed out when ZooKeeper servers are very busy. Clients throw the below exception and fail all pending operations {code} org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss at org.apache.zookeeper.KeeperException.create(KeeperException.java:99) {code} Clients log the below information {noformat} 2016-09-22 01:49:08,001 [myid:127.0.0.1:11228] - WARN [main-SendThread(127.0.0.1:11228):ClientCnxn$SendThread@1181] - Client session timed out, have not heard from server in 13908ms for sessionid 0x20000d21b280000 2016-09-22 01:49:08,001 [myid:127.0.0.1:11228] - INFO [main-SendThread(127.0.0.1:11228):ClientCnxn$SendThread@1229] - Client session timed out, have not heard from server in 13908ms for sessionid 0x20000d21b280000, closing socket connection and attempting reconnect {noformat} *STEPS TO REPRODUCE:* # Create a multi operation {code} List<Op> ops = new ArrayList<Op>(); for (int i = 0; i < N; i++) { Op create = Op.create(rootNode + "/" + i, "".getBytes(), ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT); ops.add(create); } {code} Choose N in such a way that the total multi operation request size is less than but near 1 MB. For bigger request sizes increase jute.maxbuffer in the servers # Submit the multi operation request {code} zooKeeper.multi(ops);{code} # After repeating the above steps a few times the issue is reproduced |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 23 weeks ago | 0|i33ei7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2569 | plain password is stored when set individual ACL using digest scheme |
Bug | Open | Major | Unresolved | Rakesh Kumar Singh | Rakesh Kumar Singh | Rakesh Kumar Singh | 08/Sep/16 09:58 | 23/Sep/16 01:05 | 3.5.1 | security | 0 | 3 | The plain password is stored when setting an individual ACL using the digest scheme, instead of the username and the encoded hash string of <username:password>: [zk: localhost:2181(CONNECTED) 13] addauth digest user:pass [zk: localhost:2181(CONNECTED) 14] setAcl /newNode digest:user:pass:crdwa [zk: localhost:2181(CONNECTED) 15] getAcl /newNode 'digest,'user:pass : cdrwa [zk: localhost:2181(CONNECTED) 16] |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 26 weeks ago | 0|i33ean: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
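For illustration, what should be persisted for a digest ACL is the username plus the base64-encoded SHA-1 of <username:password>, never the plaintext; this mirrors the scheme the server-side DigestAuthenticationProvider uses, but the sketch below is an illustrative reimplementation, not the ZooKeeper class.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class DigestAclSketch {
    // Illustrative reimplementation, not the ZooKeeper class: builds
    // "user:" + base64(SHA-1("user:password")), which is what getAcl
    // should show instead of the plaintext password.
    static String generateDigest(String idPassword) throws Exception {
        String user = idPassword.split(":", 2)[0];
        byte[] sha1 = MessageDigest.getInstance("SHA-1")
                .digest(idPassword.getBytes(StandardCharsets.UTF_8));
        return user + ":" + Base64.getEncoder().encodeToString(sha1);
    }

    public static void main(String[] args) throws Exception {
        // The stored ACL id looks like "user:<28 base64 chars>",
        // so getAcl would no longer reveal "pass".
        System.out.println(generateDigest("user:pass"));
    }
}
```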
| ZooKeeper | ZOOKEEPER-2568 | Node created having name with space is not deleted with delete command |
Bug | Resolved | Minor | Information Provided | Unassigned | Prabhunath Yadav | Prabhunath Yadav | 08/Sep/16 07:18 | 12/Sep/16 22:19 | 08/Sep/16 18:36 | 3.4.6 | java client, server | 0 | 1 | In java using | For Example : String myNode="/MyNode"+new Date(); connector.createNode(myNode, new Date().toString().getBytes()); and createNode is defined as: public void createNode(String path, byte[] data) throws Exception { zk.create(path, data, Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT); } If we delete the node with the command delete /MyNodeFri Aug 12 09:42:16 GMT+05:30 2016 then we get an exception saying Command failed: java.lang.NumberFormatException: for input string: "Aug" How can such a node be deleted? rmr can remove it, but why is the delete command not working? |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | Important |
3 years, 27 weeks, 2 days ago | 0|i33e1z: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2567 | Error message is not correct when wrong argument is passed for "reconfig" cmd |
Bug | Open | Minor | Unresolved | Rakesh Kumar Singh | Rakesh Kumar Singh | Rakesh Kumar Singh | 08/Sep/16 07:00 | 23/Nov/16 09:53 | java client | 0 | 3 | Error message is not correct when a wrong argument is passed for the "reconfig" cmd Steps to reproduce:- 1. Start zookeeper in cluster mode 2. Use the reconfig cmd with a wrong argument (pass : instead of ;) [zk: localhost:2181(CONNECTED) 10] reconfig -remove 3 -add 3=10.18.221.194:2888:3888:2181 KeeperErrorCode = BadArguments for Here the error message on the client console is not complete and informative. The log is as below:- 2016-09-08 18:54:08,701 [myid:1] - INFO [ProcessThread(sid:1 cport:-1)::PrepRequestProcessor@512] - Incremental reconfig 2016-09-08 18:54:08,702 [myid:1] - INFO [ProcessThread(sid:1 cport:-1)::PrepRequestProcessor@843] - Got user-level KeeperException when processing sessionid:0x100299b7eac0000 type:reconfig cxid:0x7 zxid:0x400000004 txntype:-1 reqpath:n/a Error Path:Reconfiguration failed Error:KeeperErrorCode = BadArguments for Reconfiguration failed |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 26 weeks, 2 days ago | 0|i33e0v: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2566 | space should be truncated while reading password for keystore/truststore which is required to configure while SSL enabled |
Bug | Open | Minor | Unresolved | Unassigned | Athyab Ameer | Athyab Ameer | 08/Sep/16 05:51 | 08/Sep/16 18:37 | 3.5.1 | server | 0 | 2 | ZOOKEEPER-2521 | Space should be truncated while reading the keystore/truststore password, which is required configuration when SSL is enabled. As of now, if we configure the password with any leading/trailing space, the zookeeper server will fail to start. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 28 weeks ago | 0|i33dxb: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2565 | listquota <path> should display the quota even it is set on parent/child node |
Bug | Open | Minor | Unresolved | kevin.chen | Rakesh Kumar Singh | Rakesh Kumar Singh | 08/Sep/16 05:29 | 26/May/19 02:47 | 3.5.1 | server | 0 | 7 | 0 | 2400 | listquota <path> should display the quota even when it is set on a parent/child node. As of now, suppose we have a parent-child hierarchy, for example n1->n2->n3, and a quota is set on n2. If we try to get quota details on n1 or n3 using listquota, it says no quota is set, but if we try to set a quota on those nodes it fails, saying a quota is already set on a parent node... So listquota should report the quota set on any node in the hierarchy, with the exact path on which it is set, even though the command is called on another node in that hierarchy. |
100% | 100% | 2400 | 0 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 11 weeks, 2 days ago | 0|i33duf: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
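The ancestor half of the lookup the report asks for can be sketched stand-alone (findQuotaPath is a hypothetical helper; also reporting a quota set on a descendant would additionally require a subtree scan):

```java
import java.util.HashSet;
import java.util.Set;

public class QuotaLookupSketch {
    // Hypothetical helper: starting at the given path, walk up the
    // ancestor chain and return the nearest path that has a quota
    // set, or null when no ancestor (or the node itself) has one.
    static String findQuotaPath(Set<String> pathsWithQuota, String path) {
        for (String p = path; !p.isEmpty();
                p = p.substring(0, Math.max(0, p.lastIndexOf('/')))) {
            if (pathsWithQuota.contains(p)) {
                return p;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        Set<String> quotas = new HashSet<>();
        quotas.add("/n1/n2");   // quota set on the middle node only
        // Asking on the child surfaces the ancestor's quota path,
        // which is the behavior the report requests.
        System.out.println(findQuotaPath(quotas, "/n1/n2/n3")); // /n1/n2
        System.out.println(findQuotaPath(quotas, "/n1"));       // null
    }
}
```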
| ZooKeeper | ZOOKEEPER-2564 | No message is prompted when trying to delete quota with different quota option |
Bug | Open | Minor | Unresolved | Rakesh Kumar Singh | Rakesh Kumar Singh | Rakesh Kumar Singh | 08/Sep/16 02:57 | 28/May/19 06:09 | 3.5.1 | server | 0 | 4 | No message is prompted when trying to delete a quota with a different quota option. Steps to reproduce:- 1. Start zookeeper in cluster mode 2. Create a node and set a quota, e.g. setquota -n 10 /test 3. Now try to delete it as below:- delquota -b /test Here no message/exception is prompted. We should prompt a message like "Byte Quota does not exist for /test" |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 42 weeks, 2 days ago | 0|i33dlb: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2563 | A revisit to setquota |
Bug | Resolved | Major | Fixed | Rakesh Kumar Singh | Rakesh Kumar Singh | Rakesh Kumar Singh | 08/Sep/16 02:19 | 17/Jul/19 17:33 | 17/Jul/19 07:42 | 3.5.1 | 3.6.0 | server | 0 | 3 | 0 | 7200 | delquota -[n|b] is not deleting the set quota properly Steps to reproduce:- 1. Start zookeeper in cluster mode (ssl) 2. Create a node, say /test 3. Run listquota; it says (as expected) quota for /test does not exist 4. Set a quota, say setquota -n 10 /test 5. Now try to delete it as below delquota -n /test 6. Now check the quota [zk: localhost:2181(CONNECTED) 1] listquota /test absolute path is /zookeeper/quota/test/zookeeper_limits Output quota for /test count=-1,bytes=-1 Output stat for /test count=1,bytes=5 7. Here the quota node for /test has not been deleted 8. Now try to set a new quota. It fails because the quota was not deleted correctly during the delete [zk: localhost:2181(CONNECTED) 3] setquota -n 11 /test Command failed: java.lang.IllegalArgumentException: /test has a parent /zookeeper/quota/test which has a quota But through delquota it is able to delete |
100% | 100% | 7200 | 0 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 35 weeks, 1 day ago | 0|i33dj3: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2562 | Safely persist renames of *epoch.tmp to *epoch by issuing fsync on parent directory -- Possible cluster unavailability otherwise |
Bug | Open | Major | Unresolved | Unassigned | Ramnatthan Alagappan | Ramnatthan Alagappan | 07/Sep/16 14:54 | 07/Sep/16 14:54 | 3.4.7 | 0 | 1 | Three node linux cluster | I am running a three node ZooKeeper cluster. Renames of acceptedEpoch.tmp to acceptedEpoch and currentEpoch.tmp to currentEpoch have to be persisted to disk by explicitly issuing fsync on the parent directory. If not, the rename might not hit the disk immediately and if a crash occurs at this point, then the server would fail to start with the following error in the log. If this happens on two or more nodes, then the cluster can become unavailable. [myid:] - INFO [main:QuorumPeerConfig@103] - Reading configuration from: /tmp/zoo2.cfg [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.2 to address: /127.0.0.2 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.4 to address: /127.0.0.4 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.3 to address: /127.0.0.3 [myid:] - INFO [main:QuorumPeerConfig@331] - Defaulting to majority quorums [myid:1] - INFO [main:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3 [myid:1] - INFO [main:DatadirCleanupManager@79] - autopurge.purgeInterval set to 0 [myid:1] - INFO [main:DatadirCleanupManager@101] - Purge task is not scheduled. 
[myid:1] - INFO [main:QuorumPeerMain@127] - Starting quorum peer [myid:1] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:2182 [myid:1] - INFO [main:QuorumPeer@1019] - tickTime set to 2000 [myid:1] - INFO [main:QuorumPeer@1039] - minSessionTimeout set to -1 [myid:1] - INFO [main:QuorumPeer@1050] - maxSessionTimeout set to -1 [myid:1] - INFO [main:QuorumPeer@1065] - initLimit set to 5 [myid:1] - INFO [main:FileSnap@83] - Reading snapshot /run/shm/dice-4636/113-98-129-z_majority_RO_OM_0=60_1=55/rdir-0/version-2/snapshot.100000002 [myid:1] - ERROR [main:QuorumPeer@557] - Unable to load database on disk java.io.IOException: The accepted epoch, 1 is less than the current epoch, 2 at org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:554) at org.apache.zookeeper.server.quorum.QuorumPeer.start(QuorumPeer.java:500) at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:153) at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:111) at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:78) 2016-04-15 03:24:57,144 [myid:1] - ERROR [main:QuorumPeerMain@89] - Unexpected exception, exiting abnormally java.lang.RuntimeException: Unable to run quorum server at org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:558) at org.apache.zookeeper.server.quorum.QuorumPeer.start(QuorumPeer.java:500) at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:153) at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:111) at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:78) Caused by: java.io.IOException: The accepted epoch, 1 is less than the current epoch, 2 at org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:554) ... 
4 more Similarly, when a new log file is created, the parent directory needs to be explicitly fsynced to persist the log file. Otherwise data loss is possible (we have reproduced the above issues). Please see this: https://www.quora.com/Linux/When-should-you-fsync-the-containing-directory-in-addition-to-the-file-itself and http://research.cs.wisc.edu/wind/Publications/alice-osdi14.pdf. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 28 weeks, 1 day ago | 0|i33cvb: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
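The durable-rename pattern the reporter asks for can be sketched in C (a minimal illustration, not ZooKeeper's actual patch; the helper name `durable_rename` and the file names used below are hypothetical): after rename(2), the parent directory itself is fsync'ed so the new directory entry survives a crash.

```c
#include <assert.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Durably rename tmp_path to final_path.  rename(2) alone may leave the
 * updated directory entry only in the page cache; the rename becomes
 * crash-safe once the parent directory itself has been fsync'ed. */
static int durable_rename(const char *tmp_path, const char *final_path,
                          const char *parent_dir)
{
    if (rename(tmp_path, final_path) == -1)
        return -1;
    int dirfd = open(parent_dir, O_RDONLY);   /* O_DIRECTORY where available */
    if (dirfd == -1)
        return -1;
    if (fsync(dirfd) == -1) {                 /* persist the directory entry */
        close(dirfd);
        return -1;
    }
    return close(dirfd);
}
```

This mirrors the advice in the links above: fsync the file's contents first, then rename, then fsync the containing directory.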
| ZooKeeper | ZOOKEEPER-2561 | CLONE - Possible Cluster Unavailability |
Bug | Open | Major | Unresolved | Unassigned | Athyab Ameer | Athyab Ameer | 07/Sep/16 13:40 | 07/Sep/16 13:40 | 3.4.8 | server | 0 | 1 | ZOOKEEPER-2560 | Three node linux cluster | Possible Cluster Unavailability I am running a three node ZooKeeper cluster. Each node runs Linux. I see the below sequence of system calls when ZooKeeper appends a user data item to the log file. 1 write("/data/version-2/log.200000001", offset=65, count=12) 2 write("/data/version-2/log.200000001", offset=77, count=16323) 3 write("/data/version-2/log.200000001", offset=16400, count=4209) 4 write("/data/version-2/log.200000001", offset=20609, count=1) 5 fdatasync("/data//version-2/log.200000001") Now, a crash could happen just after operation 4 but before the final fdatasync. In this situation, the file system could persist the 4th operation and fail to persist the 3rd operation, because there is no fsync in between them. In such cases, the ZooKeeper server fails to start with the following messages in its log file: [myid:] - INFO [main:QuorumPeerConfig@103] - Reading configuration from: /tmp/zoo2.cfg [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.2 to address: /127.0.0.2 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.4 to address: /127.0.0.4 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.3 to address: /127.0.0.3 [myid:] - INFO [main:QuorumPeerConfig@331] - Defaulting to majority quorums [myid:1] - INFO [main:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3 [myid:1] - INFO [main:DatadirCleanupManager@79] - autopurge.purgeInterval set to 0 [myid:1] - INFO [main:DatadirCleanupManager@101] - Purge task is not scheduled. 
[myid:1] - INFO [main:QuorumPeerMain@127] - Starting quorum peer [myid:1] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:2182 [myid:1] - INFO [main:QuorumPeer@1019] - tickTime set to 2000 [myid:1] - INFO [main:QuorumPeer@1039] - minSessionTimeout set to -1 [myid:1] - INFO [main:QuorumPeer@1050] - maxSessionTimeout set to -1 [myid:1] - INFO [main:QuorumPeer@1065] - initLimit set to 5 [myid:1] - INFO [main:FileSnap@83] - Reading snapshot /data/version-2/snapshot.100000002 [myid:1] - ERROR [main:QuorumPeer@557] - Unable to load database on disk java.io.IOException: CRC check failed at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.next(FileTxnLog.java:635) at org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:158) at org.apache.zookeeper.server.ZKDatabase.loadDataBase(ZKDatabase.java:223) at org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:510) at org.apache.zookeeper.server.quorum.QuorumPeer.start(QuorumPeer.java:500) at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:153) at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:111) at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:78) 2016-04-15 04:00:32,795 [myid:1] - ERROR [main:QuorumPeerMain@89] - Unexpected exception, exiting abnormally java.lang.RuntimeException: Unable to run quorum server at org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:558) at org.apache.zookeeper.server.quorum.QuorumPeer.start(QuorumPeer.java:500) at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:153) at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:111) at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:78) Caused by: java.io.IOException: CRC check failed at 
org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.next(FileTxnLog.java:635) at org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:158) at org.apache.zookeeper.server.ZKDatabase.loadDataBase(ZKDatabase.java:223) at org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:510) ... 4 more The same happens when the 3rd and 4th writes hit the disk but the 2nd operation does not. Now, two nodes of a three node cluster can easily reach this state, rendering the entire cluster unavailable. ZooKeeper, on recovery should be able to handle such checksum mismatches gracefully to maintain cluster availability. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 28 weeks, 1 day ago | 0|i33cpz: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2560 | Possible Cluster Unavailability |
Bug | Open | Major | Unresolved | Unassigned | Ramnatthan Alagappan | Ramnatthan Alagappan | 07/Sep/16 11:12 | 07/Sep/16 13:40 | 3.4.8 | server | 0 | 2 | ZOOKEEPER-2561 | Three node linux cluster | Possible Cluster Unavailability I am running a three node ZooKeeper cluster. Each node runs Linux. I see the below sequence of system calls when ZooKeeper appends a user data item to the log file. 1 write("/data/version-2/log.200000001", offset=65, count=12) 2 write("/data/version-2/log.200000001", offset=77, count=16323) 3 write("/data/version-2/log.200000001", offset=16400, count=4209) 4 write("/data/version-2/log.200000001", offset=20609, count=1) 5 fdatasync("/data//version-2/log.200000001") Now, a crash could happen just after operation 4 but before the final fdatasync. In this situation, the file system could persist the 4th operation and fail to persist the 3rd operation, because there is no fsync in between them. In such cases, the ZooKeeper server fails to start with the following messages in its log file: [myid:] - INFO [main:QuorumPeerConfig@103] - Reading configuration from: /tmp/zoo2.cfg [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.2 to address: /127.0.0.2 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.4 to address: /127.0.0.4 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.3 to address: /127.0.0.3 [myid:] - INFO [main:QuorumPeerConfig@331] - Defaulting to majority quorums [myid:1] - INFO [main:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3 [myid:1] - INFO [main:DatadirCleanupManager@79] - autopurge.purgeInterval set to 0 [myid:1] - INFO [main:DatadirCleanupManager@101] - Purge task is not scheduled. 
[myid:1] - INFO [main:QuorumPeerMain@127] - Starting quorum peer [myid:1] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:2182 [myid:1] - INFO [main:QuorumPeer@1019] - tickTime set to 2000 [myid:1] - INFO [main:QuorumPeer@1039] - minSessionTimeout set to -1 [myid:1] - INFO [main:QuorumPeer@1050] - maxSessionTimeout set to -1 [myid:1] - INFO [main:QuorumPeer@1065] - initLimit set to 5 [myid:1] - INFO [main:FileSnap@83] - Reading snapshot /data/version-2/snapshot.100000002 [myid:1] - ERROR [main:QuorumPeer@557] - Unable to load database on disk java.io.IOException: CRC check failed at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.next(FileTxnLog.java:635) at org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:158) at org.apache.zookeeper.server.ZKDatabase.loadDataBase(ZKDatabase.java:223) at org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:510) at org.apache.zookeeper.server.quorum.QuorumPeer.start(QuorumPeer.java:500) at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:153) at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:111) at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:78) 2016-04-15 04:00:32,795 [myid:1] - ERROR [main:QuorumPeerMain@89] - Unexpected exception, exiting abnormally java.lang.RuntimeException: Unable to run quorum server at org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:558) at org.apache.zookeeper.server.quorum.QuorumPeer.start(QuorumPeer.java:500) at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:153) at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:111) at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:78) Caused by: java.io.IOException: CRC check failed at 
org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.next(FileTxnLog.java:635) at org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:158) at org.apache.zookeeper.server.ZKDatabase.loadDataBase(ZKDatabase.java:223) at org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:510) ... 4 more The same happens when the 3rd and 4th writes hit the disk but the 2nd operation does not. Now, two nodes of a three node cluster can easily reach this state, rendering the entire cluster unavailable. ZooKeeper, on recovery should be able to handle such checksum mismatches gracefully to maintain cluster availability. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 28 weeks, 1 day ago | 0|i33cdj: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
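The graceful recovery the reporter asks for can be sketched as an append-only log whose reader treats a checksum mismatch at the tail as the end of the log rather than a fatal error. This is a minimal C illustration with an assumed [len][checksum][payload] record framing and a toy checksum; it is not ZooKeeper's actual FileTxnLog format or checksum.

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Toy additive checksum -- a stand-in for a real one like Adler-32. */
static uint32_t toy_sum(const unsigned char *p, uint32_t n)
{
    uint32_t s = 0;
    while (n--) s = s * 31 + *p++;
    return s;
}

/* Append one [len][checksum][payload] record to the log. */
static void append_record(FILE *log, const unsigned char *payload, uint32_t len)
{
    uint32_t sum = toy_sum(payload, len);
    fwrite(&len, sizeof len, 1, log);
    fwrite(&sum, sizeof sum, 1, log);
    fwrite(payload, 1, len, log);
}

/* Recovery scan: count valid records, treating the first checksum
 * mismatch as a torn tail (end of log), not an unrecoverable error. */
static int recover(FILE *log)
{
    int good = 0;
    uint32_t len, sum;
    unsigned char buf[256];
    rewind(log);
    while (fread(&len, sizeof len, 1, log) == 1 &&
           fread(&sum, sizeof sum, 1, log) == 1) {
        if (len > sizeof buf || fread(buf, 1, len, log) != len)
            break;                       /* truncated record */
        if (toy_sum(buf, len) != sum)
            break;                       /* torn write: stop, don't crash */
        good++;
    }
    return good;
}
```

A real implementation would also need to distinguish a torn tail (safe to discard) from corruption in the middle of the log (actual data loss), for example by checking whether any valid record follows the bad one.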
| ZooKeeper | ZOOKEEPER-2559 | the quota should be deleted when its parasitic path doesn't exist |
Bug | Open | Major | Unresolved | Rakesh Kumar Singh | Rakesh Kumar Singh | Rakesh Kumar Singh | 07/Sep/16 08:14 | 01/Jul/19 03:06 | 3.5.1, 3.5.2 | server | 0 | 1 | 0 | 1200 | Failed to delete the quota set for an ephemeral node when the node is deleted because the client session closed [zk: localhost:2181(CONNECTED) 0] create -e /e_test hello Created /e_test [zk: localhost:2181(CONNECTED) 1] setquota -n 10 /e_test [zk: localhost:2181(CONNECTED) 2] listquota /e_test absolute path is /zookeeper/quota/e_test/zookeeper_limits Output quota for /e_test count=10,bytes=-1 Output stat for /e_test count=1,bytes=5 Now close the client connection, so the ephemeral node gets deleted. But the corresponding quota does not get deleted, as shown below:- [zk: localhost:2181(CONNECTED) 0] ls / [test, test1, test3, zookeeper] [zk: localhost:2181(CONNECTED) 1] listquota /e_test absolute path is /zookeeper/quota/e_test/zookeeper_limits Output quota for /e_test count=10,bytes=-1 Output stat for /e_test count=0,bytes=0 [zk: localhost:2181(CONNECTED) 2] Now, again create an ephemeral node with the same name and try to set the quota; it will fail. [zk: localhost:2181(CONNECTED) 2] create -e /e_test hello Created /e_test [zk: localhost:2181(CONNECTED) 3] setquota -n 10 /e_test Command failed: java.lang.IllegalArgumentException: /e_test has a parent /zookeeper/quota/e_test which has a quota [zk: localhost:2181(CONNECTED) 4] |
100% | 100% | 1200 | 0 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 28 weeks, 1 day ago | 0|i33bzz: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2558 | Potential memory leak in recordio.c |
Bug | Closed | Minor | Fixed | Michael Han | Michael Han | Michael Han | 06/Sep/16 18:57 | 31/Mar/17 05:01 | 07/Sep/16 22:59 | 3.4.9, 3.5.2 | 3.4.10, 3.5.3, 3.6.0 | c client | 0 | 3 | We have code like this in {{create_buffer_iarchive}} and {{create_buffer_oarchive}}: {code} struct iarchive *ia = malloc(sizeof(*ia)); struct buff_struct *buff = malloc(sizeof(struct buff_struct)); if (!ia) return 0; if (!buff) { free(ia); return 0; } {code} If the first malloc fails but the second succeeds, then the memory allocated by the second malloc will not get freed when the function returns. One could argue that if the first malloc fails the second will also fail (i.e. when the system runs out of memory), but I could also see the possibility of the opposite (the first malloc fails because of heap fragmentation but the second succeeds). |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 28 weeks ago |
Reviewed
|
0|i33b4n: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
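A hedged sketch of the straightforward fix: check each allocation immediately, before attempting the next one, so a partial failure cannot leak the other pointer. The struct layouts below are simplified stand-ins for the ones in recordio.c, and `create_buffer_iarchive_fixed` is a hypothetical name, not the committed patch.

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified stand-ins for the structs in recordio.c. */
struct buff_struct { char *buffer; int len; int off; };
struct iarchive { void *priv; };

/* Allocate the archive and its buffer, checking each malloc result
 * before the next allocation, so no allocation can be leaked. */
static struct iarchive *create_buffer_iarchive_fixed(void)
{
    struct iarchive *ia = malloc(sizeof(*ia));
    if (!ia)
        return NULL;                       /* nothing else allocated yet */
    struct buff_struct *buff = malloc(sizeof(struct buff_struct));
    if (!buff) {
        free(ia);                          /* undo the first allocation */
        return NULL;
    }
    buff->buffer = NULL; buff->len = 0; buff->off = 0;
    ia->priv = buff;
    return ia;
}
```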
| ZooKeeper | ZOOKEEPER-2557 | Update gitignore to account for other file extensions |
Improvement | Closed | Trivial | Fixed | Edward Ribeiro | Edward Ribeiro | Edward Ribeiro | 06/Sep/16 11:44 | 31/Mar/17 05:01 | 08/Sep/16 17:40 | 3.4.8 | 3.4.10, 3.5.3, 3.6.0 | 1 | 6 | INFRA-12573 | We are in the process of moving from subversion to git, but I have seen that the current ZK's {{gitignore}} doesn't account for many spurious types of files (e.g., *.swp, *.tmp) as well as other files created by IDEs (Eclipse, Intellij and NetBeans), among other file extensions. | easyfix | 9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 3 years, 28 weeks ago |
Reviewed
|
0|i33aen: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2556 | peerType remains as "observer" in zoo.cfg even though we change the node from observer to participant runtime |
Bug | Resolved | Minor | Fixed | Rakesh Kumar Singh | Rakesh Kumar Singh | Rakesh Kumar Singh | 06/Sep/16 06:43 | 03/Nov/16 12:48 | 03/Nov/16 11:41 | 3.5.1, 3.5.2 | server | 0 | 5 | peerType remains as "observer" in zoo.cfg even though we change the node from observer to participant at runtime. Steps to reproduce:- 1. Start zookeeper in cluster with one node as observer by configuring peerType=observer in zoo.cfg and server.2=10.18.219.50:2888:3888:observer;2181 2. Start the cluster 3. Start a client and change the node from observer to participant; the peerType configuration remained the same, though other settings like clientPort were picked up from zoo.cfg >reconfig -remove 2 -add 2=10.18.219.50:2888:3888:participant;2181 We should either remove this parameter or update it with the correct node type at run time |
9223372036854775807 | No Perforce job exists for this issue. | 3 | 9223372036854775807 | 3 years, 20 weeks ago | Committed. Thanks again Rakesh! | 0|i339xz: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2555 | zookeeper started in observer mode takes long time to come to observer state once leader restarted and so client connected to observer mode has to wait for longer time to get service |
Bug | Open | Major | Unresolved | Unassigned | Rakesh Kumar Singh | Rakesh Kumar Singh | 06/Sep/16 05:59 | 06/Sep/16 06:00 | 3.5.1 | server | 0 | 1 | zookeeper started in observer mode takes a long time (sometimes 25 seconds) to come to the observer state once the leader is restarted, so a client connected to the observer has to wait longer to get service. Steps to reproduce:- 1. Start zookeeper in cluster mode in which one node is in observer mode 2. Stop the leader node (sometimes we need to wait for 30 secs to reproduce this issue) 3. Start the leader node 4. Check the observer node status - it will be in "Error contacting service. It is probably not running." and takes a long time (25 secs) to come to observer mode. Hence a client connected to this node will not get service during this time. Log at observer node is as below:- 2016-09-06 17:49:14,774 [myid:2] - WARN [WorkerSender[myid=2]:QuorumCnxManager@459] - Cannot open channel to 3 at election address /10.18.221.194:3888 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:444) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:485) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:421) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465) at java.lang.Thread.run(Thread.java:722) 
{color:red}2016-09-06 17:49:14,776 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@688] - Notification: 2 (message format version), 3 (n.leader), 0x2a00000001 (n.zxid), 0x7 (n.round), LOOKING (n.state), 1 (n.sid), 0x2b (n.peerEPoch), LOOKING (my state)100000000 (n.config version) 2016-09-06 17:49:40,377 [myid:2] - INFO [QuorumPeer[myid=2](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):FastLeaderElection@928] - Notification time out: 51200 {color} 2016-09-06 17:49:40,378 [myid:2] - INFO [WorkerSender[myid=2]:QuorumCnxManager@278] - Have smaller server identifier, so dropping the connection: (3, 2) 2016-09-06 17:49:40,379 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@688] - Notification: 2 (message format version), 3 (n.leader), 0x2a00000001 (n.zxid), 0x7 (n.round), FOLLOWING (n.state), 1 (n.sid), 0x2c (n.peerEPoch), LOOKING (my state)100000000 (n.config version) 2016-09-06 17:49:40,381 [myid:2] - INFO [/10.18.219.50:3888:QuorumCnxManager$Listener@637] - Received connection request /10.18.221.194:34085 2016-09-06 17:49:40,388 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@688] - Notification: 2 (message format version), 3 (n.leader), 0x2a00000001 (n.zxid), 0x7 (n.round), LEADING (n.state), 3 (n.sid), 0x2c (n.peerEPoch), LOOKING (my state)100000000 (n.config version) 2016-09-06 17:49:40,388 [myid:2] - INFO [QuorumPeer[myid=2](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):MBeanRegistry@119] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id2,name1=replica.2,name2=LeaderElection] 2016-09-06 17:49:40,389 [myid:2] - INFO [QuorumPeer[myid=2](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):QuorumPeer@1049] - OBSERVING 2016-09-06 17:49:40,389 [myid:2] - INFO [QuorumPeer[myid=2](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):ZooKeeperServer@858] - minSessionTimeout set to 4000 2016-09-06 17:49:40,389 [myid:2] - INFO [QuorumPeer[myid=2](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):ZooKeeperServer@867] - maxSessionTimeout 
set to 40000 2016-09-06 17:49:40,389 [myid:2] - INFO [QuorumPeer[myid=2](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):ZooKeeperServer@156] - Created server with tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir /home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/data/version-2 snapdir /home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/data/version-2 2016-09-06 17:49:40,389 [myid:2] - INFO [QuorumPeer[myid=2](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):ObserverZooKeeperServer@56] - syncEnabled =true 2016-09-06 17:49:40,389 [myid:2] - INFO [QuorumPeer[myid=2](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):Observer@72] - Observing /10.18.221.194:2888 2016-09-06 17:49:40,396 [myid:2] - INFO [QuorumPeer[myid=2](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):FileSnap@83] - Reading snapshot /home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/data/version-2/snapshot.2a00000001 2016-09-06 17:49:40,410 [myid:2] - INFO [QuorumPeer[myid=2](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):Learner@369] - Getting a snapshot from leader 2016-09-06 17:49:40,411 [myid:2] - INFO [QuorumPeer[myid=2](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):Learner@509] - Learner received NEWLEADER message 2016-09-06 17:49:40,411 [myid:2] - INFO [QuorumPeer[myid=2](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):FileTxnSnapLog@298] - Snapshotting: 0x2c00000000 to /home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/data/version-2/snapshot.2c00000000 2016-09-06 17:49:40,417 [myid:2] - INFO [QuorumPeer[myid=2](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):Learner@493] - Learner received UPTODATE message 2016-09-06 17:49:40,417 [myid:2] - INFO [QuorumPeer[myid=2](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):CommitProcessor@254] - Configuring CommitProcessor with 8 worker threads. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 28 weeks, 2 days ago | 0|i339wf: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2554 | reconfig can not add new server as observer |
Bug | Resolved | Critical | Not A Problem | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 05/Sep/16 15:14 | 06/Sep/16 14:22 | 06/Sep/16 13:14 | 3.5.0, 3.5.1, 3.5.2 | server | 0 | 3 | When trying to add a new observer server using the reconfig API, the server gets added as a participant. STEPS: # Create a 3 node cluster. {code} server.0=127.0.0.1:11223:11224:participant;127.0.0.1:11222 server.1=127.0.0.1:11226:11227:participant;127.0.0.1:11225 server.2=127.0.0.1:11229:11230:participant;127.0.0.1:11228 {code} # Suppose server 2 is the leader in the above cluster. Configure the new server as {code} server.2=127.0.0.1:11229:11230:participant;127.0.0.1:11228 server.3=127.0.0.1:11232:11233:observer;127.0.0.1:11231 {code} # Connect to server 1 and execute the reconfig command {code} zkClient.reconfig("server.3=127.0.0.1:11232:11233:observer;127.0.0.1:11231", null, null, -1, null, null); {code} # Verify server 3. It was supposed to run as an observer but it is running as a participant |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 3 years, 28 weeks, 2 days ago | 0|i339af: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2553 | ZooKeeper cluster unavailable due to corrupted log file during power failures -- java.io.IOException: Unreasonable length |
Bug | Open | Major | Unresolved | Unassigned | Ramnatthan Alagappan | Ramnatthan Alagappan | 05/Sep/16 13:20 | 15/Nov/19 08:38 | 3.4.8 | server | 1 | 4 | Normal ZooKeeper cluster with 3 nodes running Linux | I am running a three node ZooKeeper cluster. When a new log file is created by ZooKeeper, I see the following sequence of system calls: 1. creat(new_log) 2. write(new_log, count=16) // This is a log header, I believe 3. truncate(new_log, from 16 bytes to 16 KBytes) // I have configured the log size to be 16K. When the above sequence of operations completes, it is reasonable to expect the newly created log file to contain the header (16 bytes) and then be filled with zeros till the end of the log. But when a crash occurs (due to a power failure) while the truncate system call is in progress, it is possible for the log to contain garbage data when the system restarts from the crash. Note that if the crash occurs just after the truncate system call completes, then there is no problem. Basically, the truncate needs to be atomically persisted for ZooKeeper to recover from crashes correctly, or (more realistically) the recovery code needs to deal with the case of expecting garbage in a newly created log. As mentioned, if a crash occurs during the truncate system call, then ZooKeeper will fail to start with the following exception. 
Here is the stack trace: java.io.IOException: Unreasonable length = -295704495 at org.apache.jute.BinaryInputArchive.checkLength(BinaryInputArchive.java:127) at org.apache.jute.BinaryInputArchive.readBuffer(BinaryInputArchive.java:92) at org.apache.zookeeper.server.persistence.Util.readTxnBytes(Util.java:233) at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.next(FileTxnLog.java:625) at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.next(FileTxnLog.java:652) at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.init(FileTxnLog.java:552) at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.<init>(FileTxnLog.java:527) at org.apache.zookeeper.server.persistence.FileTxnLog.read(FileTxnLog.java:354) at org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:132) at org.apache.zookeeper.server.ZKDatabase.loadDataBase(ZKDatabase.java:223) at org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:510) at org.apache.zookeeper.server.quorum.QuorumPeer.start(QuorumPeer.java:500) at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:153) at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:111) at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:78) [myid:1] - ERROR [main:QuorumPeerMain@89] - Unexpected exception, exiting abnormally java.lang.RuntimeException: Unable to run quorum server at org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:558) at org.apache.zookeeper.server.quorum.QuorumPeer.start(QuorumPeer.java:500) at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:153) at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:111) at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:78) Caused by: java.io.IOException: Unreasonable length = 
-295704495 at org.apache.jute.BinaryInputArchive.checkLength(BinaryInputArchive.java:127) at org.apache.jute.BinaryInputArchive.readBuffer(BinaryInputArchive.java:92) at org.apache.zookeeper.server.persistence.Util.readTxnBytes(Util.java:233) at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.next(FileTxnLog.java:625) at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.next(FileTxnLog.java:652) at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.init(FileTxnLog.java:552) at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.<init>(FileTxnLog.java:527) at org.apache.zookeeper.server.persistence.FileTxnLog.read(FileTxnLog.java:354) at org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:132) at org.apache.zookeeper.server.ZKDatabase.loadDataBase(ZKDatabase.java:223) at org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:510) ... 4 more Next, it is possible for two nodes of a 3-node ZooKeeper cluster to reach the same state. In that case, they both will fail to startup, rendering the entire cluster unavailable. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 17 weeks, 6 days ago | 0|i3397z: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
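One defensive pattern (an illustrative C sketch, not ZooKeeper's actual padding code; the function name `create_prefilled_log` and the header byte are hypothetical) is to pre-fill the new log with explicit zero writes followed by an fsync, instead of relying on the crash behavior of truncate(2) alone: once the fsync returns, a crash can no longer expose garbage in the padding.

```c
#include <assert.h>
#include <fcntl.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

#define LOG_PREALLOC (16 * 1024)   /* matches the reporter's 16K log size */

/* Create a log file: write a header, then explicitly write zeros up to
 * LOG_PREALLOC, then fsync.  After the fsync returns, the padding is
 * durably zero-filled rather than filesystem-dependent garbage. */
static int create_prefilled_log(const char *path)
{
    int fd = open(path, O_CREAT | O_TRUNC | O_WRONLY, 0644);
    if (fd == -1)
        return -1;
    char header[16];
    memset(header, 0x5a, sizeof header);      /* stand-in for the real header */
    if (write(fd, header, sizeof header) != (ssize_t)sizeof header)
        goto fail;
    char zeros[4096] = {0};
    int done = (int)sizeof header;
    while (done < LOG_PREALLOC) {             /* zero-fill the rest */
        int chunk = LOG_PREALLOC - done;
        if (chunk > (int)sizeof zeros)
            chunk = (int)sizeof zeros;
        if (write(fd, zeros, chunk) != chunk)
            goto fail;
        done += chunk;
    }
    if (fsync(fd) == -1)                      /* durability point */
        goto fail;
    return close(fd);
fail:
    close(fd);
    return -1;
}
```

The alternative the reporter suggests, hardening the recovery path so that it tolerates garbage in a freshly preallocated log, avoids the extra write traffic but requires a reliable way to tell padding from a corrupted record.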
| ZooKeeper | ZOOKEEPER-2552 | Revisit release note doc and remove the items which are not related to the released version |
Bug | Closed | Major | Fixed | Edward Ribeiro | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 05/Sep/16 08:43 | 12/May/17 21:05 | 15/Dec/16 05:14 | 3.4.9 | 3.4.10 | 0 | 4 | A couple of issues listed on http://zookeeper.apache.org/doc/r3.4.9/releasenotes.html are either 'Open' or 'Patch Available'. For example, issues were wrongly marked with the "3.4.8" fix version in jira, which caused the trouble. This jira is to cross-check all the jira issues present in the release note and verify their correctness. |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 2 years, 44 weeks, 5 days ago | 0|i338zb: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2551 | Remove Hadoop Logo from ZooKeeper documentation |
Bug | Patch Available | Major | Unresolved | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 05/Sep/16 03:01 | 05/Feb/20 07:11 | 3.7.0, 3.5.8 | documentation | 0 | 1 | ZooKeeper documentation has the Hadoop logo in each page's header. The Hadoop logo has no significance for the ZooKeeper project, so it should be removed, as ZooKeeper is independent of the Hadoop project. | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 1 year, 45 weeks ago | 0|i338nr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2550 | FollowerResyncConcurrencyTest failed in ZooKeeper 3.3.3 |
Bug | Open | Blocker | Unresolved | Unassigned | KangYin | KangYin | 05/Sep/16 02:05 | 08/Sep/16 01:36 | 3.3.3 | leaderElection, quorum, server, tests | 0 | 1 | Windows 10, Java 1.8.0, IDEA 2016.1.4, JUnit 4.8.1 |
I'm studying on the Test of ZooKeeper 3.3.3 but got a test failure when I run _testResyncBySnapThenDiffAfterFollowerCrashes_ in _FollowerResyncConcurrencyTest.java_. {quote} 2016-09-05 13:57:35,072 - INFO [main:QuorumBase@307] - FINISHED testResyncBySnapThenDiffAfterFollowerCrashes java.util.concurrent.TimeoutException: Did not connect at org.apache.zookeeper.test.ClientBase$CountdownWatcher.waitForConnected(ClientBase.java:119) at org.apache.zookeeper.test.FollowerResyncConcurrencyTest.testResyncBySnapThenDiffAfterFollowerCrashes(FollowerResyncConcurrencyTest.java:95) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at junit.framework.TestCase.runTest(TestCase.java:168) at junit.framework.TestCase.runBare(TestCase.java:134) at junit.framework.TestResult$1.protect(TestResult.java:110) at junit.framework.TestResult.runProtected(TestResult.java:128) at junit.framework.TestResult.run(TestResult.java:113) at junit.framework.TestCase.run(TestCase.java:124) at junit.framework.TestSuite.runTest(TestSuite.java:232) at junit.framework.TestSuite.run(TestSuite.java:227) at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83) at org.junit.runner.JUnitCore.run(JUnitCore.java:157) at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:119) at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:42) at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:234) at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:74) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144) {quote} Which happened in _FollowerResyncConcurrencyTest.java_ at line 92. {quote} index = (index == 1) ? 2 : 1; qu.shutdown(index); final ZooKeeper zk3 = new DisconnectableZooKeeper("127.0.0.1:" + qu.getPeer(3).peer.getClientPort(), 1000,watcher3); {color:red}watcher3.waitForConnected(CONNECTION_TIMEOUT);{color} zk3.create("/mybar", null, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL); {quote} I checked the Log Message, and I guess it is probably because of the following ERROR (marked as blue): {quote} 2016-09-05 13:56:54,928 - INFO [main-SendThread():ClientCnxn$SendThread@1041] - Opening socket connection to server /127.0.0.1:11237 2016-09-05 13:56:54,930 - INFO [main-SendThread(127.0.0.1:11237):ClientCnxn$SendThread@949] - Socket connection established to 127.0.0.1/127.0.0.1:11237, initiating session 2016-09-05 13:56:54,930 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11237:NIOServerCnxn$Factory@251] - Accepted socket connection from /127.0.0.1:33566 2016-09-05 13:56:54,957 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11237:NIOServerCnxn@777] - Client attempting to establish new session at /127.0.0.1:33566 {color:blue} 2016-09-05 13:56:55,000 - INFO [SyncThread:3:FileTxnLog@197] - Creating new log file: log.100000001 2016-09-05 13:56:55,000 - WARN [QuorumPeer:/0:0:0:0:0:0:0:0:11235:Follower@116] - Got zxid 0x100000001 expected 0x1 2016-09-05 13:56:55,000 - INFO [SyncThread:2:FileTxnLog@197] - Creating new log file: log.100000001 2016-09-05 13:56:55,078 - ERROR [CommitProcessor:3:CommitProcessor@146] - Unexpected exception causing CommitProcessor to exit java.lang.AssertionError at org.apache.zookeeper.jmx.MBeanRegistry.register(MBeanRegistry.java:66) at org.apache.zookeeper.server.NIOServerCnxn.finishSessionInit(NIOServerCnxn.java:1552) at 
org.apache.zookeeper.server.FinalRequestProcessor.processRequest(FinalRequestProcessor.java:183) at org.apache.zookeeper.server.quorum.Leader$ToBeAppliedRequestProcessor.processRequest(Leader.java:540) at org.apache.zookeeper.server.quorum.CommitProcessor.run(CommitProcessor.java:73) 2016-09-05 13:56:55,078 - INFO [CommitProcessor:3:CommitProcessor@148] - CommitProcessor exited loop! {color} 2016-09-05 13:56:55,931 - INFO [main-SendThread(127.0.0.1:11237):ClientCnxn$SendThread@1157] - Client session timed out, have not heard from server in 1001ms for sessionid 0x0, closing socket connection and attempting reconnect 2016-09-05 13:56:58,035 - INFO [main-SendThread(127.0.0.1:11237):ClientCnxn$SendThread@1041] - Opening socket connection to server 127.0.0.1/127.0.0.1:11237 2016-09-05 13:56:58,036 - INFO [main-SendThread(127.0.0.1:11237):ClientCnxn$SendThread@949] - Socket connection established to 127.0.0.1/127.0.0.1:11237, initiating session {quote} I'd greatly appreciate any help. Thanks. |
test | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 28 weeks ago | 0|i338mf: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2549 | As NettyServerCnxn.sendResponse() allows all exceptions to bubble up, it can stop the main ZK request-processing thread |
Bug | Patch Available | Major | Unresolved | Yuliya Feldman | Yuliya Feldman | Yuliya Feldman | 03/Sep/16 02:39 | 25/Dec/19 21:26 | 3.5.1 | 3.7.0 | server | 0 | 5 | 0 | 600 | ZOOKEEPER-1364 | As NettyServerCnxn.sendResponse() allows all exceptions to bubble up, it can stop the main ZK request-processing thread and make the ZooKeeper server look like it is hanging, while it simply can no longer process any requests. The idea is to catch all exceptions in NettyServerCnxn.sendResponse(), convert them to IOException, and allow that to propagate up |
100% | 100% | 600 | 0 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 8 | 9223372036854775807 | 12 weeks ago | 0|i337rj: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
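The proposal in ZOOKEEPER-2549 above (catch everything in sendResponse() and rethrow as IOException) boils down to a generic wrap-and-rethrow pattern. The sketch below is illustrative only — the Channel interface and class name are hypothetical, not the actual NettyServerCnxn code:

```java
import java.io.IOException;

// Illustrative wrap-and-rethrow sketch: any failure inside the send path
// is converted to IOException, so the caller can close just this
// connection instead of an unchecked exception killing the whole
// request-processing thread.
public class SendResponseSketch {

    // Hypothetical stand-in for the underlying write, which may throw anything.
    interface Channel {
        void write(byte[] data) throws Exception;
    }

    static void sendResponse(Channel channel, byte[] data) throws IOException {
        try {
            channel.write(data);
        } catch (IOException ioe) {
            throw ioe; // already the right type, let it propagate as-is
        } catch (Exception e) {
            // Convert unexpected exceptions to IOException, as the issue
            // description suggests, and let that propagate up.
            throw new IOException("Error sending response", e);
        }
    }

    public static void main(String[] args) {
        Channel failing = d -> { throw new IllegalStateException("boom"); };
        try {
            sendResponse(failing, new byte[0]);
        } catch (IOException expected) {
            System.out.println("wrapped: " + expected.getCause().getClass().getSimpleName());
        }
    }
}
```

The caller that catches the IOException can then drop only the offending connection, which is the behavior the issue asks for.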
| ZooKeeper | ZOOKEEPER-2548 | zooInspector does not start on Windows |
Bug | Closed | Major | Fixed | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 02/Sep/16 16:23 | 17/May/17 23:44 | 07/Sep/16 18:14 | 3.5.3, 3.6.0 | contrib | 0 | 3 | ZooInspector is a very useful tool, but its Windows scripts do not seem to be maintained. The zooInspector.cmd command fails with the error below: {noformat} D:\workspace\ZooInspector>zooInspector.cmd D:\workspace\ZooInspector>#!/bin/sh '#!' is not recognized as an internal or external command, operable program or batch file. D:\workspace\ZooInspector># Licensed to the Apache Software Foundation (ASF) under one or more '#' is not recognized as an internal or external command, operable program or batch file. {noformat} |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 28 weeks, 1 day ago |
Reviewed
|
0|i337ef: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2547 | IP ACL is not working with NettyServerCnxnFactory |
Bug | Patch Available | Major | Unresolved | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 02/Sep/16 13:16 | 05/Feb/20 07:11 | 3.5.0 | 3.7.0, 3.5.8 | 1 | 3 | IP-based ACL does not work with NettyServerCnxnFactory. Scenario: 1) Configure serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory and start the ZooKeeper server 2) Create a znode "/n" with ACL(ZooDefs.Perms.ALL, new Id("ip", "127.0.0.1/8")) 3) Create a child node /n/n1. The child node creation fails. The same scenario works with NIOServerCnxnFactory |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 3 years, 26 weeks, 5 days ago | 0|i3372v: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
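For context on what an ACL expression like ip:127.0.0.1/8 in ZOOKEEPER-2547 is expected to match: the first addr/bits bits of the client address must equal those of the ACL address. Here is a minimal, self-contained sketch of that CIDR-style matching — an illustration of the semantics, not ZooKeeper's actual ACL code:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Minimal sketch of "ip:addr/bits" ACL matching: an expression like
// "127.0.0.1/8" matches any client whose first 8 address bits equal
// those of 127.0.0.1 (i.e. any 127.x.x.x address).
public class IpAclSketch {

    static boolean matches(String aclExpr, String clientIp) throws UnknownHostException {
        String[] parts = aclExpr.split("/");
        byte[] acl = InetAddress.getByName(parts[0]).getAddress();
        byte[] client = InetAddress.getByName(clientIp).getAddress();
        // Without an explicit /bits suffix, the whole address must match.
        int bits = parts.length > 1 ? Integer.parseInt(parts[1]) : acl.length * 8;
        for (int i = 0; i < bits; i++) {
            int mask = 0x80 >> (i % 8); // bit i within byte i/8, MSB first
            if ((acl[i / 8] & mask) != (client[i / 8] & mask)) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) throws UnknownHostException {
        System.out.println(matches("127.0.0.1/8", "127.255.0.3")); // true: same /8
        System.out.println(matches("127.0.0.1/8", "10.18.0.1"));   // false
    }
}
```

In the reported scenario, both connection factories receive the same /8 expression; the bug is that the Netty path fails to apply this check correctly, not that the semantics differ.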
| ZooKeeper | ZOOKEEPER-2546 | Started throwing "Error Path:null Error:KeeperErrorCode = ReconfigInProgress" error when trying to change the cluster using reconfig while the I/O hung at one node |
Bug | Open | Major | Unresolved | Unassigned | Rakesh Kumar Singh | Rakesh Kumar Singh | 02/Sep/16 07:56 | 02/Sep/16 07:56 | 3.5.1 | server | 0 | 1 | Started throwing "Error Path:null Error:KeeperErrorCode = ReconfigInProgress" error when trying to change the cluster using reconfig and the IO hangged at one node. Steps:- 1. Start Zookeeper in cluster mode 2. try to reconfig the cluster using "reconfig" command from one node's client (194) like "reconfig -remove 3 -add 3=10.18.221.194:2888:3888;2181 3. make the IO busy for 5-10 secs at 194 node and then release 4. Again execute the above reconfig command It is failing to execute even after 3-4 mins. Server log is attached. (Complete server log is attached) 2016-09-02 18:12:05,845 [myid:2] - INFO [QuorumPeer[myid=2](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):QuorumPeer@1074] - LEADING 2016-09-02 18:12:05,848 [myid:2] - INFO [QuorumPeer[myid=2](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):Leader@63] - TCP NoDelay set to: true 2016-09-02 18:12:05,848 [myid:2] - INFO [QuorumPeer[myid=2](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):Leader@83] - zookeeper.leader.maxConcurrentSnapshots = 10 2016-09-02 18:12:05,848 [myid:2] - INFO [QuorumPeer[myid=2](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):Leader@85] - zookeeper.leader.maxConcurrentSnapshotTimeout = 5 2016-09-02 18:12:05,849 [myid:2] - INFO [QuorumPeer[myid=2](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):ZooKeeperServer@858] - minSessionTimeout set to 4000 2016-09-02 18:12:05,849 [myid:2] - INFO [QuorumPeer[myid=2](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):ZooKeeperServer@867] - maxSessionTimeout set to 40000 2016-09-02 18:12:05,849 [myid:2] - INFO [QuorumPeer[myid=2](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):ZooKeeperServer@156] - Created server with tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir /home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/data/version-2 snapdir 
/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/data/version-2 2016-09-02 18:12:05,850 [myid:2] - INFO [QuorumPeer[myid=2](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):Leader@412] - LEADING - LEADER ELECTION TOOK - 5 2016-09-02 18:12:05,852 [myid:2] - INFO [QuorumPeer[myid=2](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):FileTxnSnapLog@298] - Snapshotting: 0x100000001 to /home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/data/version-2/snapshot.100000001 2016-09-02 18:12:06,854 [myid:2] - INFO [LearnerHandler-/10.18.101.80:55632:LearnerHandler@382] - Follower sid: 1 : info : 10.18.101.80:2888:3888:participant;0.0.0.0:2181 2016-09-02 18:12:06,869 [myid:2] - INFO [LearnerHandler-/10.18.101.80:55632:LearnerHandler@683] - Synchronizing with Follower sid: 1 maxCommittedLog=0x100000001 minCommittedLog=0x100000001 lastProcessedZxid=0x100000001 peerLastZxid=0x100000001 2016-09-02 18:12:06,869 [myid:2] - INFO [LearnerHandler-/10.18.101.80:55632:LearnerHandler@727] - Sending DIFF zxid=0x100000001 for peer sid: 1 2016-09-02 18:12:06,888 [myid:2] - INFO [QuorumPeer[myid=2](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):Leader@1245] - Have quorum of supporters, sids: [ [1, 2] ]; starting up and setting last processed zxid: 0x200000000 2016-09-02 18:12:06,890 [myid:2] - INFO [QuorumPeer[myid=2](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):CommitProcessor@254] - Configuring CommitProcessor with 8 worker threads. 
2016-09-02 18:12:06,898 [myid:2] - INFO [QuorumPeer[myid=2](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):ContainerManager@64] - Using checkIntervalMs=60000 maxPerMinute=10000 2016-09-02 18:12:18,886 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@688] - Notification: 2 (message format version), 3 (n.leader), 0x0 (n.zxid), 0xffffffffffffffff (n.round), LEADING (n.state), 3 (n.sid), 0x1 (n.peerEPoch), LEADING (my state)200000028 (n.config version) 2016-09-02 18:13:47,869 [myid:2] - INFO [ProcessThread(sid:2 cport:-1)::PrepRequestProcessor@512] - Incremental reconfig 2016-09-02 18:13:47,872 [myid:2] - ERROR [ProcessThread(sid:2 cport:-1)::QuorumPeer@1383] - setLastSeenQuorumVerifier called with stale config 8589934593. Current version: 8589934632 2016-09-02 18:14:15,545 [myid:2] - INFO [ProcessThread(sid:2 cport:-1)::PrepRequestProcessor@843] - Got user-level KeeperException when processing sessionid:0x1000aa5ce650000 type:reconfig cxid:0x3 zxid:0x200000002 txntype:-1 reqpath:n/a Error Path:null Error:KeeperErrorCode = ReconfigInProgress 2016-09-02 18:14:56,442 [myid:2] - INFO [NIOServerCxnFactory.AcceptThread:/0.0.0.0:2181:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /10.18.219.50:48388 2016-09-02 18:14:56,454 [myid:2] - INFO [NIOWorkerThread-1:NIOServerCnxn@485] - Processing srvr command from /10.18.219.50:48388 2016-09-02 18:14:56,467 [myid:2] - INFO [NIOWorkerThread-1:NIOServerCnxn@606] - Closed socket connection for client /10.18.219.50:48388 (no session established for client) 2016-09-02 18:17:18,365 [myid:2] - INFO [ProcessThread(sid:2 cport:-1)::PrepRequestProcessor@843] - Got user-level KeeperException when processing sessionid:0x1000aa5ce650000 type:reconfig cxid:0x4 zxid:0x200000003 txntype:-1 reqpath:n/a Error Path:null Error:KeeperErrorCode = ReconfigInProgress 2016-09-02 18:19:23,699 [myid:2] - INFO [ProcessThread(sid:2 cport:-1)::PrepRequestProcessor@843] - Got user-level KeeperException when processing 
sessionid:0x1000aa5ce650000 type:reconfig cxid:0x5 zxid:0x200000004 txntype:-1 reqpath:n/a Error Path:null Error:KeeperErrorCode = ReconfigInProgress |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 28 weeks, 6 days ago | 0|i336jz: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2545 | Old zoo.cfg.dynamic* files generated by reconfig execution are never deleted and keep consuming storage |
Bug | Open | Major | Unresolved | Rakesh Kumar Singh | Rakesh Kumar Singh | Rakesh Kumar Singh | 02/Sep/16 02:53 | 23/Sep/16 01:08 | 3.5.1 | server | 0 | 1 | The old zoo.cfg.dynamic* files, which are created every time "reconfig" is executed, keep accumulating. Steps to reproduce: 1. Set up ZooKeeper in cluster mode and start it 2. Run a reconfig command such as >reconfig -remove 3 -add 1=10.18.101.80:2888:3888;2181 3. It will create a new zoo.cfg.dynamic in the conf folder The problem is that the old zoo.cfg.dynamic* files are not deleted and keep consuming storage |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 28 weeks, 6 days ago | 0|i3362v: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
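Until the server cleans these files up itself, the accumulation described in ZOOKEEPER-2545 can be worked around externally by deleting all but the newest zoo.cfg.dynamic.* file. This is a hypothetical cleanup sketch, not existing ZooKeeper behavior:

```java
import java.io.File;
import java.util.Arrays;
import java.util.Comparator;

// Hypothetical workaround: keep only the newest zoo.cfg.dynamic.* file
// in the conf directory and delete the rest. ZooKeeper itself does not
// do this; the sketch only illustrates the cleanup idea.
public class DynamicConfigCleanup {

    // Returns the number of stale dynamic-config files deleted.
    static int deleteAllButNewest(File confDir) {
        File[] files = confDir.listFiles(
                (dir, name) -> name.startsWith("zoo.cfg.dynamic."));
        if (files == null || files.length <= 1) {
            return 0; // nothing stale to remove
        }
        // Oldest first; in practice the numeric version suffix could be
        // compared instead of the modification time.
        Arrays.sort(files, Comparator.comparingLong(File::lastModified));
        int deleted = 0;
        for (int i = 0; i < files.length - 1; i++) { // all but the newest
            if (files[i].delete()) {
                deleted++;
            }
        }
        return deleted;
    }
}
```

Care is needed not to delete the file currently referenced by zoo.cfg's dynamicConfigFile pointer, which is why only the newest file is retained.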
| ZooKeeper | ZOOKEEPER-2544 | "unary operator expected" message on the console while trying to get the ZooKeeper status |
Bug | Open | Minor | Unresolved | Unassigned | Rakesh Kumar Singh | Rakesh Kumar Singh | 02/Sep/16 01:45 | 08/Oct/17 12:41 | 3.5.1, 3.5.2 | server | 0 | 2 | root@BLR1000010865:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin# ./zkServer.sh status ZooKeeper JMX enabled by default Using config: /home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../conf/zoo.cfg ./zkServer.sh: line 223: [: =: unary operator expected |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 2 years, 23 weeks, 4 days ago | 0|i3361b: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2543 | Add C API for creating nodes with TTL |
New Feature | Resolved | Major | Duplicate | Unassigned | Michael Han | Michael Han | 02/Sep/16 00:37 | 19/Dec/19 18:01 | 08/Oct/16 14:20 | 3.6.0 | c client | 0 | 1 | ZOOKEEPER-2168, ZOOKEEPER-2609 | ZOOKEEPER-2169 introduces a new feature for creation nodes with TTL, which is supported by Java client with new Java API. Similar API should be added to C client as well. | ttl_nodes | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 28 weeks, 6 days ago | 0|i335z3: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2542 | Update NOTICE file with Netty notice in 3.4 |
Bug | Closed | Blocker | Fixed | Rakesh Radhakrishnan | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 31/Aug/16 13:40 | 31/Mar/17 05:01 | 12/Dec/16 18:04 | 3.4.9 | 3.4.10 | 0 | 4 | We need to update the NOTICE file in the 3.4 branch as we did for 3.5 and trunk, see ZOOKEEPER-2459. | newbie | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 7 weeks, 2 days ago | 0|i3331j: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2541 | We can't configure the "secureClientPort" in the dynamic configuration and connect through the client |
Bug | Open | Major | Unresolved | Unassigned | Rakesh Kumar Singh | Rakesh Kumar Singh | 30/Aug/16 01:58 | 30/Aug/16 01:58 | 3.5.1 | server | 1 | 3 | We can't configure the "secureClientPort" in dynamicConfiguration and connect through client Steps to reproduce:- 1. Configure the zookeeper in cluster mode with SSL mode 2. comment the clientport and secureClientport details from zoo.cfg file 3. Configure the secureClientport in dynamicConfiguration as below:- server.1=10.18.101.80:2888:3888:participant;2181 server.2=10.18.219.50:2888:3888:participant;2181 server.3=10.18.221.194:2888:3888:participant;2181 4. Start the cluster 5. Start one client using zkCli.sh and try to connect to any one of the cluster, it fails Client log:- BLR1000007042:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin # ./zkCli.sh /usr/bin/java Connecting to localhost:2181 2016-08-30 13:42:33,574 [myid:] - INFO [main:Environment@109] - Client environment:zookeeper.version=3.5.1-alpha--1, built on 08/18/2016 08:20 GMT 2016-08-30 13:42:33,578 [myid:] - INFO [main:Environment@109] - Client environment:host.name=BLR1000007042 2016-08-30 13:42:33,578 [myid:] - INFO [main:Environment@109] - Client environment:java.version=1.7.0_79 2016-08-30 13:42:33,581 [myid:] - INFO [main:Environment@109] - Client environment:java.vendor=Oracle Corporation 2016-08-30 13:42:33,581 [myid:] - INFO [main:Environment@109] - Client environment:java.home=/usr/java/jdk1.7.0_79/jre 2016-08-30 13:42:33,581 [myid:] - INFO [main:Environment@109] - Client 
environment:java.class.path=/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../build/classes:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../build/lib/*.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/slf4j-log4j12-1.7.5.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/slf4j-api-1.7.5.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/servlet-api-2.5-20081211.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/netty-3.7.0.Final.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/log4j-1.2.16.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/jline-2.11.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/jetty-util-6.1.26.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/jetty-6.1.26.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/javacc.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/jackson-mapper-asl-1.9.11.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/jackson-core-asl-1.9.11.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/commons-cli-1.2.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/ant-eclipse-1.0-jvm1.2.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../zookeeper-3.5.1-alpha.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../src/java/lib/ant-eclipse-1.0-jvm1.2.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../conf: 2016-08-30 13:42:33,582 [myid:] - INFO [main:Environment@109] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib 2016-08-30 13:42:33,582 [myid:] - INFO [main:Environment@109] - Client environment:java.io.tmpdir=/tmp 2016-08-30 13:42:33,582 [myid:] - INFO [main:Environment@109] - 
Client environment:java.compiler=<NA> 2016-08-30 13:42:33,582 [myid:] - INFO [main:Environment@109] - Client environment:os.name=Linux 2016-08-30 13:42:33,582 [myid:] - INFO [main:Environment@109] - Client environment:os.arch=amd64 2016-08-30 13:42:33,583 [myid:] - INFO [main:Environment@109] - Client environment:os.version=3.0.76-0.11-default 2016-08-30 13:42:33,583 [myid:] - INFO [main:Environment@109] - Client environment:user.name=root 2016-08-30 13:42:33,583 [myid:] - INFO [main:Environment@109] - Client environment:user.home=/root 2016-08-30 13:42:33,583 [myid:] - INFO [main:Environment@109] - Client environment:user.dir=/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin 2016-08-30 13:42:33,583 [myid:] - INFO [main:Environment@109] - Client environment:os.memory.free=52MB 2016-08-30 13:42:33,586 [myid:] - INFO [main:Environment@109] - Client environment:os.memory.max=227MB 2016-08-30 13:42:33,587 [myid:] - INFO [main:Environment@109] - Client environment:os.memory.total=57MB 2016-08-30 13:42:33,591 [myid:] - INFO [main:ZooKeeper@716] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@272f15b0 Welcome to ZooKeeper! 2016-08-30 13:42:33,681 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1138] - Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. 
Will not attempt to authenticate using SASL (unknown error) JLine support is enabled [INFO] Unable to bind key for unsupported operation: backward-delete-word [INFO] Unable to bind key for unsupported operation: backward-delete-word [INFO] Unable to bind key for unsupported operation: down-history [INFO] Unable to bind key for unsupported operation: up-history [INFO] Unable to bind key for unsupported operation: up-history [INFO] Unable to bind key for unsupported operation: down-history [INFO] Unable to bind key for unsupported operation: up-history [INFO] Unable to bind key for unsupported operation: down-history [INFO] Unable to bind key for unsupported operation: up-history [INFO] Unable to bind key for unsupported operation: down-history [INFO] Unable to bind key for unsupported operation: up-history [INFO] Unable to bind key for unsupported operation: down-history [zk: localhost:2181(CONNECTING) 0] 2016-08-30 13:42:33,975 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxnSocketNetty$ZKClientPipelineFactory@363] - SSL handler added for channel: null 2016-08-30 13:42:34,004 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@980] - Socket connection established, initiating session, client: /0:0:0:0:0:0:0:1:47374, server: localhost/0:0:0:0:0:0:0:1:2181 2016-08-30 13:42:34,006 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxnSocketNetty$1@146] - channel is connected: [id: 0xd4aaee7b, /0:0:0:0:0:0:0:1:47374 => localhost/0:0:0:0:0:0:0:1:2181] 2016-08-30 13:42:34,030 [myid:] - INFO [New I/O worker #1:ClientCnxnSocketNetty$ZKClientHandler@377] - channel is disconnected: [id: 0xd4aaee7b, /0:0:0:0:0:0:0:1:47374 :> localhost/0:0:0:0:0:0:0:1:2181] 2016-08-30 13:42:34,030 [myid:] - INFO [New I/O worker #1:ClientCnxnSocketNetty@201] - channel is told closing 2016-08-30 13:42:34,030 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1253] - channel for sessionid 0x0 is lost, closing socket connection and attempting 
reconnect 2016-08-30 13:42:34,033 [myid:] - WARN [New I/O worker #1:ClientCnxnSocketNetty$ZKClientHandler@432] - Exception caught: [id: 0xd4aaee7b, /0:0:0:0:0:0:0:1:47374 :> localhost/0:0:0:0:0:0:0:1:2181] EXCEPTION: java.nio.channels.ClosedChannelException java.nio.channels.ClosedChannelException at org.jboss.netty.handler.ssl.SslHandler$6.run(SslHandler.java:1580) at org.jboss.netty.channel.socket.ChannelRunnableWrapper.run(ChannelRunnableWrapper.java:40) at org.jboss.netty.channel.socket.nio.AbstractNioWorker.executeInIoThread(AbstractNioWorker.java:71) at org.jboss.netty.channel.socket.nio.NioWorker.executeInIoThread(NioWorker.java:36) at org.jboss.netty.channel.socket.nio.AbstractNioWorker.executeInIoThread(AbstractNioWorker.java:57) at org.jboss.netty.channel.socket.nio.NioWorker.executeInIoThread(NioWorker.java:36) at org.jboss.netty.channel.socket.nio.AbstractNioChannelSink.execute(AbstractNioChannelSink.java:34) at org.jboss.netty.handler.ssl.SslHandler.channelClosed(SslHandler.java:1566) at org.jboss.netty.channel.Channels.fireChannelClosed(Channels.java:468) at org.jboss.netty.channel.socket.nio.AbstractNioWorker.close(AbstractNioWorker.java:376) at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:93) at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:109) at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312) at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:90) at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 2016-08-30 13:42:34,230 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1138] - Opening socket connection to server localhost/127.0.0.1:2181. 
Will not attempt to authenticate using SASL (unknown error) 2016-08-30 13:42:34,240 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxnSocketNetty$ZKClientPipelineFactory@363] - SSL handler added for channel: null 2016-08-30 13:42:34,241 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@980] - Socket connection established, initiating session, client: /127.0.0.1:60295, server: localhost/127.0.0.1:2181 Server log (It is not starting in secureMode though all the required configuration is done for ssl except secureClientport which is configured in dynamicConfiguration):- 2016-08-30 13:40:13,436 [myid:1] - INFO [QuorumPeer[myid=1](plain=/0.0.0.0:2181)(secure=disabled):FastLeaderElection@928] - Notification time out: 800 2016-08-30 13:40:14,239 [myid:1] - WARN [QuorumPeer[myid=1](plain=/0.0.0.0:2181)(secure=disabled):QuorumCnxManager@459] - Cannot open channel to 2 at election address /10.18.219.50:3888 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:444) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:485) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:513) at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:919) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:1040) 201 |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 29 weeks, 2 days ago | 0|i32zlb: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2540 | When the ZooKeeper server is started with the server details (including the client port) configured in the dynamic configuration, wrong log info is logged |
Bug | Open | Minor | Unresolved | Rakesh Kumar Singh | Rakesh Kumar Singh | Rakesh Kumar Singh | 30/Aug/16 01:16 | 23/Sep/16 01:09 | 3.5.1 | server | 0 | 1 | When the ZooKeeper server is started with the server details (including the client port) configured in the dynamic configuration, wrong log info is logged. Configure the server details as below, which contain the client port as well, and remove the client port from zoo.cfg (as it is a duplicate): server.1=10.18.101.80:2888:3888:participant;2181 server.2=10.18.219.50:2888:3888:participant;2181 server.3=10.18.221.194:2888:3888:participant;2181 Start the cluster; we then see the message 2016-08-30 17:00:33,984 [myid:] - INFO [main:QuorumPeerConfig@306] - clientPort is not set which is not correct |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 29 weeks, 2 days ago | 0|i32zjz: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2539 | NullPointerException thrown when running the command "config -c" with the client port specified separately rather than in the new style |
Bug | Closed | Minor | Fixed | Rakesh Kumar Singh | Rakesh Kumar Singh | Rakesh Kumar Singh | 29/Aug/16 10:09 | 17/May/17 23:43 | 08/Sep/16 10:22 | 3.5.1, 3.5.2 | 3.5.3, 3.6.0 | java client | 0 | 4 | Throwing nullpointerException when run the command "config -c" when client port is mentioned as separate and not like new style 1. Configure the zookeeper to start in cluster mode like below- clientPort=2181 server.1=10.18.101.80:2888:3888 server.2=10.18.219.50:2888:3888 server.3=10.18.221.194:2888:3888 and not like below:- server.1=10.18.101.80:2888:3888:participant;2181 server.2=10.18.219.50:2888:3888:participant;2181 server.3=10.18.221.194:2888:3888:participant;2181 2. Start the cluster and one client using >zkCli.sh 3. execute command "config -c" It is throwing nullpointerException:- root@BLR1000010865:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin# ./zkCli.sh Connecting to localhost:2181 2016-08-29 21:45:19,558 [myid:] - INFO [main:Environment@109] - Client environment:zookeeper.version=3.5.1-alpha--1, built on 08/18/2016 08:20 GMT 2016-08-29 21:45:19,561 [myid:] - INFO [main:Environment@109] - Client environment:host.name=BLR1000010865 2016-08-29 21:45:19,562 [myid:] - INFO [main:Environment@109] - Client environment:java.version=1.7.0_17 2016-08-29 21:45:19,564 [myid:] - INFO [main:Environment@109] - Client environment:java.vendor=Oracle Corporation 2016-08-29 21:45:19,564 [myid:] - INFO [main:Environment@109] - Client environment:java.home=/usr/lib/jvm/oracle_jdk7/jre 2016-08-29 21:45:19,564 [myid:] - INFO [main:Environment@109] - Client 
environment:java.class.path=/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../build/classes:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../build/lib/*.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/slf4j-log4j12-1.7.5.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/slf4j-api-1.7.5.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/servlet-api-2.5-20081211.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/netty-3.7.0.Final.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/log4j-1.2.16.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/jline-2.11.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/jetty-util-6.1.26.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/jetty-6.1.26.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/javacc.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/jackson-mapper-asl-1.9.11.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/jackson-core-asl-1.9.11.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/commons-cli-1.2.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../lib/ant-eclipse-1.0-jvm1.2.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../zookeeper-3.5.1-alpha.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../src/java/lib/ant-eclipse-1.0-jvm1.2.jar:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin/../conf: 2016-08-29 21:45:19,564 [myid:] - INFO [main:Environment@109] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib 2016-08-29 21:45:19,564 [myid:] - INFO [main:Environment@109] - Client environment:java.io.tmpdir=/tmp 2016-08-29 21:45:19,564 [myid:] - INFO [main:Environment@109] - 
Client environment:java.compiler=<NA> 2016-08-29 21:45:19,565 [myid:] - INFO [main:Environment@109] - Client environment:os.name=Linux 2016-08-29 21:45:19,565 [myid:] - INFO [main:Environment@109] - Client environment:os.arch=amd64 2016-08-29 21:45:19,565 [myid:] - INFO [main:Environment@109] - Client environment:os.version=4.4.0-31-generic 2016-08-29 21:45:19,565 [myid:] - INFO [main:Environment@109] - Client environment:user.name=root 2016-08-29 21:45:19,565 [myid:] - INFO [main:Environment@109] - Client environment:user.home=/root 2016-08-29 21:45:19,565 [myid:] - INFO [main:Environment@109] - Client environment:user.dir=/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin 2016-08-29 21:45:19,565 [myid:] - INFO [main:Environment@109] - Client environment:os.memory.free=114MB 2016-08-29 21:45:19,567 [myid:] - INFO [main:Environment@109] - Client environment:os.memory.max=227MB 2016-08-29 21:45:19,568 [myid:] - INFO [main:Environment@109] - Client environment:os.memory.total=119MB 2016-08-29 21:45:19,570 [myid:] - INFO [main:ZooKeeper@716] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@149ee0f1 Welcome to ZooKeeper! 2016-08-29 21:45:19,596 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1138] - Opening socket connection to server localhost/127.0.0.1:2181. 
Will not attempt to authenticate using SASL (unknown error) 2016-08-29 21:45:19,603 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@980] - Socket connection established, initiating session, client: /127.0.0.1:43574, server: localhost/127.0.0.1:2181 JLine support is enabled 2016-08-29 21:45:19,630 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1400] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x20044a0c51d0000, negotiated timeout = 30000 WATCHER:: WatchedEvent state:SyncConnected type:None path:null [zk: localhost:2181(CONNECTED) 0] [zk: localhost:2181(CONNECTED) 0] config -c Exception in thread "main" java.lang.NullPointerException at org.apache.zookeeper.server.util.ConfigUtils.getClientConfigStr(ConfigUtils.java:56) at org.apache.zookeeper.cli.GetConfigCommand.exec(GetConfigCommand.java:64) at org.apache.zookeeper.ZooKeeperMain.processZKCmd(ZooKeeperMain.java:674) at org.apache.zookeeper.ZooKeeperMain.processCmd(ZooKeeperMain.java:577) at org.apache.zookeeper.ZooKeeperMain.executeLine(ZooKeeperMain.java:360) at org.apache.zookeeper.ZooKeeperMain.run(ZooKeeperMain.java:320) at org.apache.zookeeper.ZooKeeperMain.main(ZooKeeperMain.java:280) root@BLR1000010865:/home/Rakesh/Zookeeper/18_Aug/cluster/zookeeper-3.5.1-alpha/bin# |
9223372036854775807 | No Perforce job exists for this issue. | 3 | 9223372036854775807 | 3 years, 28 weeks ago |
Reviewed
|
0|i32y8v: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2538 | ZOOKEEPER-3170 Flaky Test: org.apache.zookeeper.server.quorum.Zab1_0Test.testNormalObserverRun |
Sub-task | Closed | Major | Cannot Reproduce | Andor Molnar | Michael Han | Michael Han | 26/Aug/16 17:15 | 19/Dec/19 18:01 | 25/Oct/18 10:46 | 3.5.2 | 3.5.5 | quorum, server, tests | 0 | 3 | ZOOKEEPER-2135, ZOOKEEPER-1798 | This test fails fairly often. It might relate to ZOOKEEPER-1798. {noformat} org.apache.zookeeper.server.quorum.Zab1_0Test.testNormalObserverRun Failing for the past 1 build (Since Failed#1143 ) Took 0 ms. Error Message Timeout occurred. Please note the time in the report does not reflect the time until the timeout. Stacktrace junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout. at java.lang.Thread.run(Thread.java:745) {noformat} |
flaky, flaky-test | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 1 year, 21 weeks ago | 0|i32vd3: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2537 | When a path with a leading space is provided for "dataDir", the correct path (with the space trimmed) is used for the snapshot, but zookeeper_server.pid is created in the root (/) folder |
Bug | Closed | Major | Duplicate | Rakesh Kumar Singh | Rakesh Kumar Singh | Rakesh Kumar Singh | 26/Aug/16 05:08 | 19/Dec/19 18:02 | 12/Sep/16 18:13 | 3.5.1, 3.5.2 | 3.5.3 | server | 0 | 2 | ZOOKEEPER-2536 | Scenario 1: When a path with a leading space is provided for "dataDir", the correct path (with the space trimmed) is used for the snapshot, but zookeeper_server.pid is created in the root (/) folder. Steps to reproduce: 1. Configure the dataDir: dataDir= /home/Rakesh/Zookeeper/18_Aug/zookeeper-3.5.1-alpha/data (note the space after dataDir=) 2. Start the ZooKeeper server 3. The snapshot is created at the configured location with the leading space trimmed, but zookeeper_server.pid is created in the root (/) folder |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 27 weeks, 3 days ago | 0|i32u4n: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2536 | When a path with a trailing space is provided for "dataDir", the correct path (with the space trimmed) is used for the snapshot, but a temporary folder with a junk name is created for zookeeper_server.pid |
Bug | Closed | Major | Fixed | Rakesh Kumar Singh | Rakesh Kumar Singh | Rakesh Kumar Singh | 26/Aug/16 04:57 | 17/May/17 23:43 | 08/Sep/16 00:07 | 3.5.1, 3.5.2 | 3.5.3, 3.6.0 | server | 0 | 4 | ZOOKEEPER-2537 | Scenario 1: When a path with a trailing space is provided for "dataDir", the correct path (with the space trimmed) is used for the snapshot, but a temporary folder with a junk name is created for zookeeper_server.pid. Steps to reproduce: 1. Configure the dataDir: dataDir=/home/Rakesh/Zookeeper/18_Aug/zookeeper-3.5.1-alpha/data (note the space after /data) 2. Start the ZooKeeper server 3. The snapshot is created at the configured location with the trailing space trimmed, but a temp folder with a junk name (e.g. D29D4X~J) is created for zookeeper_server.pid. Scenario 2: When both a leading and a trailing space are configured in the scenario above, the temp folder is created in the zookeeper/bin folder |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 3 years, 28 weeks ago |
Reviewed
|
0|i32u3r: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
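The two "dataDir" whitespace reports above (ZOOKEEPER-2537, ZOOKEEPER-2536) both come down to config values being trimmed in one code path but not another. A minimal sketch of consistent trimming, assuming a plain `Properties`-based reader (the class and method names here are illustrative, not ZooKeeper's actual parser):

```java
import java.util.Properties;

// Illustrative sketch: trim whitespace once, at read time, so that
// "dataDir= /path" and "dataDir=/path " both resolve to "/path" and
// every consumer (snapshot dir, pid-file dir) sees the same value.
public class ConfigTrim {
    static String getTrimmed(Properties props, String key) {
        String value = props.getProperty(key);
        return value == null ? null : value.trim();
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("dataDir", " /home/zk/data ");
        System.out.println(getTrimmed(props, "dataDir")); // leading/trailing spaces removed
    }
}
```

Deriving both the snapshot location and the pid-file location from the same trimmed value would have avoided the split behavior described in both reports.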
| ZooKeeper | ZOOKEEPER-2535 | zookeeper_server.pid should be created only once the server has started completely, not before; the current approach is problematic |
Bug | Open | Major | Unresolved | Unassigned | Rakesh Kumar Singh | Rakesh Kumar Singh | 26/Aug/16 02:58 | 26/Aug/16 03:00 | 3.5.1 | server | 0 | 1 | zookeeper_server.pid should be created only once the server has started completely, not before; the current approach is problematic. Scenario: 1. Configure the following in zoo.cfg: dataDir=/home/Rakesh/Zookeeper/18_Aug/zookeeper-3.5.1-alpha/data 2. Start the ZooKeeper server 3. Change the dataDir to, say, dataDir=/home/Rakesh/Zookeeper/18_Aug/zookeeper-3.5.1-alpha/data1 4. Start ZooKeeper again without stopping the running server. Although the start fails because the port is already bound, it still creates a "zookeeper_server.pid" file (with a different PID inside) and a "version-2" folder. Now revert the dataDir path and stop the server: the folder and file created at step 4 remain |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 29 weeks, 6 days ago | 0|i32u0f: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
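The report above argues for writing the pid file only after startup has actually succeeded. A hedged sketch of that ordering (illustrative only; ZooKeeper's real pid file is written by the zkServer.sh shell script, not by Java code): bind the port first, and write the pid file only if the bind succeeds, so a second, failing start cannot leave stale files behind.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: the pid file is written strictly after the listening socket is
// bound. If the port is already taken, bind() throws and no pid file is
// ever created, avoiding the stale-pid-file problem in the report.
public class PidAfterBind {
    static ServerSocket startAndWritePid(String host, int port, Path pidFile) throws IOException {
        ServerSocket socket = new ServerSocket();
        socket.bind(new InetSocketAddress(host, port)); // fails fast if already bound
        Files.writeString(pidFile, Long.toString(ProcessHandle.current().pid()));
        return socket;
    }
}
```

The same ordering applies to creating the data directory ("version-2"): defer side effects until the startup steps that can fail have succeeded.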
| ZooKeeper | ZOOKEEPER-2534 | A configuration for the IP address (zookeeper.server.ip=) should be introduced, and the ZooKeeper server should bind only to that particular IP rather than to all IPs of the system |
Bug | Open | Major | Unresolved | Unassigned | Rakesh Kumar Singh | Rakesh Kumar Singh | 26/Aug/16 01:46 | 26/Aug/16 01:47 | 3.5.1, 3.5.2 | security | 0 | 1 | Configuration for IP address (zookeeper.server.ip=) should be introduced and zookeeper server should bind to that particular IP only not to all IPs of the system. As of now zookeeper is binding to 0.0.0.0 (all IPs - 127.0.0.1, IPv4, IPv6) of the system. | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 29 weeks, 6 days ago | 0|i32tx3: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
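A minimal sketch of what the proposed setting would change ("zookeeper.server.ip" is the reporter's suggested property name, not an existing ZooKeeper option): binding the listening socket to one configured address instead of the 0.0.0.0 wildcard.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

// Sketch: binding to a concrete address restricts which interfaces accept
// connections; an InetSocketAddress built from a port alone binds 0.0.0.0.
public class BindToConfiguredIp {
    static ServerSocket listen(String ip, int port) throws IOException {
        ServerSocket socket = new ServerSocket();
        socket.bind(new InetSocketAddress(ip, port));
        return socket;
    }

    public static void main(String[] args) throws IOException {
        String ip = System.getProperty("zookeeper.server.ip", "127.0.0.1");
        try (ServerSocket s = listen(ip, 0)) { // port 0 picks any free port
            System.out.println("listening on " + s.getLocalSocketAddress());
        }
    }
}
```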
| ZooKeeper | ZOOKEEPER-2533 | After closing zkCli with the "close" command and reconnecting with "connect", providing invalid input closes the channel and reconnects |
Bug | Open | Minor | Unresolved | Rakesh Kumar Singh | Rakesh Kumar Singh | Rakesh Kumar Singh | 25/Aug/16 05:16 | 23/Sep/16 01:10 | 3.5.1 | java client | 0 | 1 | Close the zkCli using "close" command and then connect using "connect" then provide some invalid input, it closing the channel and connecting again Steps to reproduce:- 1. Connect the Zookeeper server using zkCli 2. close the connection using "close" 3. Connect again using "connect host" 4. Once connected, input space " " and hit enter It is closing the channel and establishing again. Console log is as below:- [zk: localhost:2181(CONNECTED) 5] close 2016-08-25 16:59:04,854 [myid:] - INFO [main:ClientCnxnSocketNetty@201] - channel is told closing 2016-08-25 16:59:04,855 [myid:] - INFO [main:ZooKeeper@1110] - Session: 0x101a00305cc0008 closed [zk: localhost:2181(CLOSED) 6] 2016-08-25 16:59:04,855 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@542] - EventThread shut down for session: 0x101a00305cc0008 2016-08-25 16:59:04,856 [myid:] - INFO [New I/O worker #1:ClientCnxnSocketNetty$ZKClientHandler@377] - channel is disconnected: [id: 0xd9735868, /0:0:0:0:0:0:0:1:44595 :> localhost/0:0:0:0:0:0:0:1:2181] 2016-08-25 16:59:04,856 [myid:] - INFO [New I/O worker #1:ClientCnxnSocketNetty@201] - channel is told closing connect 10.18.101.80 2016-08-25 16:59:14,410 [myid:] - INFO [main:ZooKeeper@716] - Initiating client connection, connectString=10.18.101.80 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@19c50523 [zk: 10.18.101.80(CONNECTING) 7] 2016-08-25 16:59:14,417 [myid:] - INFO [main-SendThread(10.18.101.80:2181):ClientCnxn$SendThread@1138] - Opening socket connection to server 10.18.101.80/10.18.101.80:2181. 
Will not attempt to authenticate using SASL (unknown error) 2016-08-25 16:59:14,426 [myid:] - INFO [main-SendThread(10.18.101.80:2181):ClientCnxnSocketNetty$ZKClientPipelineFactory@363] - SSL handler added for channel: null 2016-08-25 16:59:14,428 [myid:] - INFO [New I/O worker #10:ClientCnxn$SendThread@980] - Socket connection established, initiating session, client: /10.18.101.80:58871, server: 10.18.101.80/10.18.101.80:2181 2016-08-25 16:59:14,428 [myid:] - INFO [New I/O worker #10:ClientCnxnSocketNetty$1@146] - channel is connected: [id: 0xa8f6b724, /10.18.101.80:58871 => 10.18.101.80/10.18.101.80:2181] 2016-08-25 16:59:14,473 [myid:] - INFO [New I/O worker #10:ClientCnxn$SendThread@1400] - Session establishment complete on server 10.18.101.80/10.18.101.80:2181, sessionid = 0x101a00305cc0009, negotiated timeout = 30000 WATCHER:: WatchedEvent state:SyncConnected type:None path:null |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 30 weeks ago | 0|i32rxr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2532 | zkCli throws a NullPointerException when invalid input is entered; it should print help instead |
Bug | Patch Available | Minor | Unresolved | Rakesh Kumar Singh | Rakesh Kumar Singh | Rakesh Kumar Singh | 25/Aug/16 05:11 | 05/Feb/20 07:11 | 3.5.1, 3.5.2 | 3.7.0, 3.5.8 | java client | 1 | 4 | 1. a) Connect to zookeeper using zkCli b) just input space and then hit enter 2. a) Connect to zookeeper using zkCli and hit enter it will come as connected b) just input space and then hit enter Console log is as below:- [zk: localhost:2181(CONNECTING) 0] 2016-08-25 16:54:48,143 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxnSocketNetty$ZKClientPipelineFactory@363] - SSL handler added for channel: null 2016-08-25 16:54:48,175 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@980] - Socket connection established, initiating session, client: /0:0:0:0:0:0:0:1:44592, server: localhost/0:0:0:0:0:0:0:1:2181 2016-08-25 16:54:48,178 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxnSocketNetty$1@146] - channel is connected: [id: 0xd03f4226, /0:0:0:0:0:0:0:1:44592 => localhost/0:0:0:0:0:0:0:1:2181] 2016-08-25 16:54:48,288 [myid:] - INFO [New I/O worker #1:ClientCnxn$SendThread@1400] - Session establishment complete on server localhost/0:0:0:0:0:0:0:1:2181, sessionid = 0x101a00305cc0005, negotiated timeout = 30000 WATCHER:: WatchedEvent state:SyncConnected type:None path:null Exception in thread "main" java.lang.NullPointerException at org.apache.zookeeper.ZooKeeperMain$MyCommandOptions.getArgArray(ZooKeeperMain.java:171) at org.apache.zookeeper.ZooKeeperMain.processZKCmd(ZooKeeperMain.java:613) at org.apache.zookeeper.ZooKeeperMain.processCmd(ZooKeeperMain.java:577) at org.apache.zookeeper.ZooKeeperMain.executeLine(ZooKeeperMain.java:360) at org.apache.zookeeper.ZooKeeperMain.run(ZooKeeperMain.java:320) at org.apache.zookeeper.ZooKeeperMain.main(ZooKeeperMain.java:280) ---------------------------------------- After connection is established, input space and hit enter [zk: localhost:2181(CONNECTING) 0] 2016-08-25 16:56:22,445 [myid:] - INFO 
[main-SendThread(localhost:2181):ClientCnxnSocketNetty$ZKClientPipelineFactory@363] - SSL handler added for channel: null 2016-08-25 16:56:22,481 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@980] - Socket connection established, initiating session, client: /0:0:0:0:0:0:0:1:44594, server: localhost/0:0:0:0:0:0:0:1:2181 2016-08-25 16:56:22,484 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxnSocketNetty$1@146] - channel is connected: [id: 0xe6d3a461, /0:0:0:0:0:0:0:1:44594 => localhost/0:0:0:0:0:0:0:1:2181] 2016-08-25 16:56:22,597 [myid:] - INFO [New I/O worker #1:ClientCnxn$SendThread@1400] - Session establishment complete on server localhost/0:0:0:0:0:0:0:1:2181, sessionid = 0x101a00305cc0007, negotiated timeout = 30000 WATCHER:: WatchedEvent state:SyncConnected type:None path:null [zk: localhost:2181(CONNECTED) 0] Exception in thread "main" java.lang.NullPointerException at org.apache.zookeeper.ZooKeeperMain$MyCommandOptions.getArgArray(ZooKeeperMain.java:171) at org.apache.zookeeper.ZooKeeperMain.processZKCmd(ZooKeeperMain.java:613) at org.apache.zookeeper.ZooKeeperMain.processCmd(ZooKeeperMain.java:577) at org.apache.zookeeper.ZooKeeperMain.executeLine(ZooKeeperMain.java:360) at org.apache.zookeeper.ZooKeeperMain.run(ZooKeeperMain.java:320) at org.apache.zookeeper.ZooKeeperMain.main(ZooKeeperMain.java:280) |
patch | 9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 3 years, 25 weeks, 3 days ago | submitting the patch again | 0|i32rx3: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
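The NullPointerException in the stack trace above originates from tokenizing an empty command line. A sketch of the guard the report implies (simplified; not ZooKeeperMain's actual parsing code, and the class name is illustrative): treat null or blank input as "no command" instead of producing a null argument array.

```java
// Sketch: callers get an empty array for blank input, never null, so
// downstream code like processZKCmd cannot dereference a missing
// argument list.
public class CommandLineGuard {
    static String[] parseLine(String line) {
        if (line == null || line.trim().isEmpty()) {
            return new String[0]; // nothing to execute; caller can print help
        }
        return line.trim().split("\\s+"); // simple whitespace tokenizer
    }

    public static void main(String[] args) {
        System.out.println(parseLine("   ").length);   // blank input -> 0 args
        System.out.println(parseLine("get /test")[0]); // normal command
    }
}
```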
| ZooKeeper | ZOOKEEPER-2531 | A configuration flag "zookeeper.secure=true/false" could be introduced; all SSL-related configuration (such as securePort, keystore, truststore, and their passwords) should be read and verified only when "zookeeper.secure" is true |
Bug | Open | Major | Unresolved | Unassigned | Rakesh Kumar Singh | Rakesh Kumar Singh | 25/Aug/16 02:58 | 16/Sep/16 07:36 | 3.5.1 | server | 0 | 1 | Configuration as "zookeeper.secure=true/false" can be introduced and reading and verifying all ssl related configuration (like secureport, keystore, truststore, corresponding password) should be done only when "zookeeper.secure" flag is true | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 30 weeks ago | 0|i32rpz: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
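A sketch of the single-switch gating the report proposes ("zookeeper.secure" is the reporter's suggested name, not an existing property; the class below is illustrative): read and validate SSL settings only when the switch is on.

```java
// Sketch: one boolean gate decides whether SSL settings are touched at
// all. With the gate off, a deployment with no keystore/truststore
// configured still validates cleanly.
public class SecureConfigGate {
    static boolean validateSslConfig(boolean secure, String keystore, String truststore) {
        if (!secure) {
            return true; // SSL disabled: skip reading/validating SSL settings entirely
        }
        // Only now perform the SSL-specific checks.
        return keystore != null && !keystore.isEmpty()
            && truststore != null && !truststore.isEmpty();
    }
}
```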
| ZooKeeper | ZOOKEEPER-2530 | When ZooKeeper is started in SSL mode, set a "get" watcher on a znode from the zkCli client and restart zkCli: the "Data" watcher is still present, but trying to remove it fails, saying no watcher is available |
Bug | Open | Major | Unresolved | Unassigned | Rakesh Kumar Singh | Rakesh Kumar Singh | 25/Aug/16 02:40 | 28/Jun/17 03:08 | 3.5.1 | server | 0 | 2 | ZOOKEEPER-2358 | When zookeeper started in SSL mode, set a "get" watcher on a znode from zkCli client, restart the zkCli, the "Data" watcher still present. Trying removing that watcher fails saying no watcher available Steps to reproduce:- Start Zookeeper server in ssl mode by configuring all required ssl configuration Start zkCli and set a "Data" watcher "get -w " Restart the zkCli client Check the watcher. Still the Data watcher is available Try to remove the watcher using removewachers, it fails saying no watcher available. BLR1000007042:~ # echo wchs | netcat localhost 3181 1 connections watching 1 paths Total watches:1 BLR1000007042:~ # echo wchs | netcat localhost 3181 1 connections watching 1 paths Total watches:1 Client log as below:- [zk: localhost:2181(CONNECTED) 0] get -w /test hello1 [zk: localhost:2181(CONNECTED) 1] quit 2016-08-25 14:22:00,706 [myid:] - INFO [main:ClientCnxnSocketNetty@201] - channel is told closing 2016-08-25 14:22:00,706 [myid:] - INFO [main:ZooKeeper@1110] - Session: 0x1019f8940e20000 closed 2016-08-25 14:22:00,706 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@542] - EventThread shut down for session: 0x1019f8940e20000 2016-08-25 14:22:00,707 [myid:] - INFO [New I/O worker #1:ClientCnxnSocketNetty$ZKClientHandler@377] - channel is disconnected: [id: 0x9dab735e, /127.0.0.1:57415 :> localhost/127.0.0.1:2181] 2016-08-25 14:22:00,707 [myid:] - INFO [New I/O worker #1:ClientCnxnSocketNetty@201] - channel is told closing BLR1000007042:/home/Rakesh/Zookeeper/18_Aug/zookeeper-3.5.1-alpha/bin # ./zkCli.sh /usr/bin/java Connecting to localhost:2181 2016-08-25 14:22:15,079 [myid:] - INFO [main:Environment@109] - Client environment:zookeeper.version=3.5.1-alpha--1, built on 08/18/2016 08:20 GMT 2016-08-25 14:22:15,083 [myid:] - INFO [main:Environment@109] - Client 
environment:host.name=BLR1000007042 2016-08-25 14:22:15,084 [myid:] - INFO [main:Environment@109] - Client environment:java.version=1.7.0_79 2016-08-25 14:22:15,086 [myid:] - INFO [main:Environment@109] - Client environment:java.vendor=Oracle Corporation 2016-08-25 14:22:15,086 [myid:] - INFO [main:Environment@109] - Client environment:java.home=/usr/java/jdk1.7.0_79/jre 2016-08-25 14:22:15,086 [myid:] - INFO [main:Environment@109] - Client environment:java.class.path=/home/Rakesh/Zookeeper/18_Aug/zookeeper-3.5.1-alpha/bin/../build/classes:/home/Rakesh/Zookeeper/18_Aug/zookeeper-3.5.1-alpha/bin/../build/lib/*.jar:/home/Rakesh/Zookeeper/18_Aug/zookeeper-3.5.1-alpha/bin/../lib/slf4j-log4j12-1.7.5.jar:/home/Rakesh/Zookeeper/18_Aug/zookeeper-3.5.1-alpha/bin/../lib/slf4j-api-1.7.5.jar:/home/Rakesh/Zookeeper/18_Aug/zookeeper-3.5.1-alpha/bin/../lib/servlet-api-2.5-20081211.jar:/home/Rakesh/Zookeeper/18_Aug/zookeeper-3.5.1-alpha/bin/../lib/netty-3.7.0.Final.jar:/home/Rakesh/Zookeeper/18_Aug/zookeeper-3.5.1-alpha/bin/../lib/log4j-1.2.16.jar:/home/Rakesh/Zookeeper/18_Aug/zookeeper-3.5.1-alpha/bin/../lib/jline-2.11.jar:/home/Rakesh/Zookeeper/18_Aug/zookeeper-3.5.1-alpha/bin/../lib/jetty-util-6.1.26.jar:/home/Rakesh/Zookeeper/18_Aug/zookeeper-3.5.1-alpha/bin/../lib/jetty-6.1.26.jar:/home/Rakesh/Zookeeper/18_Aug/zookeeper-3.5.1-alpha/bin/../lib/javacc.jar:/home/Rakesh/Zookeeper/18_Aug/zookeeper-3.5.1-alpha/bin/../lib/jackson-mapper-asl-1.9.11.jar:/home/Rakesh/Zookeeper/18_Aug/zookeeper-3.5.1-alpha/bin/../lib/jackson-core-asl-1.9.11.jar:/home/Rakesh/Zookeeper/18_Aug/zookeeper-3.5.1-alpha/bin/../lib/commons-cli-1.2.jar:/home/Rakesh/Zookeeper/18_Aug/zookeeper-3.5.1-alpha/bin/../lib/ant-eclipse-1.0-jvm1.2.jar:/home/Rakesh/Zookeeper/18_Aug/zookeeper-3.5.1-alpha/bin/../zookeeper-3.5.1-alpha.jar:/home/Rakesh/Zookeeper/18_Aug/zookeeper-3.5.1-alpha/bin/../src/java/lib/ant-eclipse-1.0-jvm1.2.jar:/home/Rakesh/Zookeeper/18_Aug/zookeeper-3.5.1-alpha/bin/../conf: 2016-08-25 14:22:15,087 
[myid:] - INFO [main:Environment@109] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib 2016-08-25 14:22:15,087 [myid:] - INFO [main:Environment@109] - Client environment:java.io.tmpdir=/tmp 2016-08-25 14:22:15,087 [myid:] - INFO [main:Environment@109] - Client environment:java.compiler= 2016-08-25 14:22:15,087 [myid:] - INFO [main:Environment@109] - Client environment:os.name=Linux 2016-08-25 14:22:15,087 [myid:] - INFO [main:Environment@109] - Client environment:os.arch=amd64 2016-08-25 14:22:15,087 [myid:] - INFO [main:Environment@109] - Client environment:os.version=3.0.76-0.11-default 2016-08-25 14:22:15,087 [myid:] - INFO [main:Environment@109] - Client environment:user.name=root 2016-08-25 14:22:15,087 [myid:] - INFO [main:Environment@109] - Client environment:user.home=/root 2016-08-25 14:22:15,088 [myid:] - INFO [main:Environment@109] - Client environment:user.dir=/home/Rakesh/Zookeeper/18_Aug/zookeeper-3.5.1-alpha/bin 2016-08-25 14:22:15,088 [myid:] - INFO [main:Environment@109] - Client environment:os.memory.free=52MB 2016-08-25 14:22:15,090 [myid:] - INFO [main:Environment@109] - Client environment:os.memory.max=227MB 2016-08-25 14:22:15,090 [myid:] - INFO [main:Environment@109] - Client environment:os.memory.total=57MB 2016-08-25 14:22:15,095 [myid:] - INFO [main:ZooKeeper@716] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@272f15b0 Welcome to ZooKeeper! 2016-08-25 14:22:15,182 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1138] - Opening socket connection to server localhost/127.0.0.1:2181. 
Will not attempt to authenticate using SASL (unknown error) JLine support is enabled [INFO] Unable to bind key for unsupported operation: backward-delete-word [INFO] Unable to bind key for unsupported operation: backward-delete-word [INFO] Unable to bind key for unsupported operation: down-history [INFO] Unable to bind key for unsupported operation: up-history [INFO] Unable to bind key for unsupported operation: up-history [INFO] Unable to bind key for unsupported operation: down-history [INFO] Unable to bind key for unsupported operation: up-history [INFO] Unable to bind key for unsupported operation: down-history [INFO] Unable to bind key for unsupported operation: up-history [INFO] Unable to bind key for unsupported operation: down-history [INFO] Unable to bind key for unsupported operation: up-history [INFO] Unable to bind key for unsupported operation: down-history [zk: localhost:2181(CONNECTING) 0] 2016-08-25 14:22:15,502 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxnSocketNetty$ZKClientPipelineFactory@363] - SSL handler added for channel: null 2016-08-25 14:22:15,537 [myid:] - INFO [New I/O worker #1:ClientCnxn$SendThread@980] - Socket connection established, initiating session, client: /127.0.0.1:57420, server: localhost/127.0.0.1:2181 2016-08-25 14:22:15,540 [myid:] - INFO [New I/O worker #1:ClientCnxnSocketNetty$1@146] - channel is connected: [id: 0xfc4fe483, /127.0.0.1:57420 => localhost/127.0.0.1:2181] 2016-08-25 14:22:15,673 [myid:] - INFO [New I/O worker #1:ClientCnxn$SendThread@1400] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x1019f8940e20001, negotiated timeout = 30000 WATCHER:: WatchedEvent state:SyncConnected type:None path:null [zk: localhost:2181(CONNECTED) 0] removewatches /test -a 2016-08-25 14:24:14,420 [myid:] - ERROR [New I/O worker #1:ClientCnxn@725] - Failed to find watcher! 
org.apache.zookeeper.KeeperException$NoWatcherException: KeeperErrorCode = No such watcher for /test at org.apache.zookeeper.ZooKeeper$ZKWatchManager.containsWatcher(ZooKeeper.java:377) at org.apache.zookeeper.ZooKeeper$ZKWatchManager.removeWatcher(ZooKeeper.java:252) at org.apache.zookeeper.WatchDeregistration.unregister(WatchDeregistration.java:58) at org.apache.zookeeper.ClientCnxn.finishPacket(ClientCnxn.java:712) at org.apache.zookeeper.ClientCnxn.access$1500(ClientCnxn.java:97) at org.apache.zookeeper.ClientCnxn$SendThread.readResponse(ClientCnxn.java:948) at org.apache.zookeeper.ClientCnxnSocketNetty$ZKClientHandler.messageReceived(ClientCnxnSocketNetty.java:419) at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296) at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462) at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443) at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303) at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559) at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268) at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255) at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88) at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:109) at 
org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312) at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:90) at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) KeeperErrorCode = No such watcher for /test [zk: localhost:2181(CONNECTED) 1] |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 2 years, 38 weeks, 1 day ago | 0|i32ron: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2529 | ZOOKEEPER-3170 Flaky Test: org.apache.zookeeper.test.NonRecoverableErrorTest.testZooKeeperServiceAvailableOnLeader |
Sub-task | Resolved | Major | Cannot Reproduce | Andor Molnar | Michael Han | Michael Han | 24/Aug/16 20:14 | 25/Oct/18 10:58 | 25/Oct/18 10:58 | 3.4.9, 3.5.3 | 0 | 2 | ZOOKEEPER-2135, ZOOKEEPER-2247 | This flaky is introduced by ZOOKEEPER-2247 recently. {noformat} Error Message IOException is expected due to error injected to transaction log commit Stacktrace junit.framework.AssertionFailedError: IOException is expected due to error injected to transaction log commit at org.apache.zookeeper.test.NonRecoverableErrorTest.testZooKeeperServiceAvailableOnLeader(NonRecoverableErrorTest.java:115) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:79) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.lang.Thread.run(Thread.java:745) Standard Output 2016-08-24 19:20:17,000 [myid:] - INFO [main:JUnit4ZKTestRunner@47] - No test.method specified. using default methods. 2016-08-24 19:20:17,142 [myid:] - INFO [main:JUnit4ZKTestRunner@47] - No test.method specified. using default methods. 2016-08-24 19:20:17,215 [myid:] - INFO [main:ZKTestCase$1@55] - STARTING testZooKeeperServiceAvailableOnLeader 2016-08-24 19:20:17,222 [myid:] - INFO [Time-limited test:JUnit4ZKTestRunner$LoggedInvokeMethod@77] - RUNNING TEST METHOD testZooKeeperServiceAvailableOnLeader 2016-08-24 19:20:17,228 [myid:] - INFO [Time-limited test:PortAssignment@151] - Test process 1/8 using ports from 11221 - 13913. 2016-08-24 19:20:17,230 [myid:] - INFO [Time-limited test:PortAssignment@85] - Assigned port 11222 from range 11221 - 13913. 2016-08-24 19:20:17,231 [myid:] - INFO [Time-limited test:PortAssignment@85] - Assigned port 11223 from range 11221 - 13913. 2016-08-24 19:20:17,231 [myid:] - INFO [Time-limited test:PortAssignment@85] - Assigned port 11224 from range 11221 - 13913. 2016-08-24 19:20:17,232 [myid:] - INFO [Time-limited test:PortAssignment@85] - Assigned port 11225 from range 11221 - 13913. 
2016-08-24 19:20:17,232 [myid:] - INFO [Time-limited test:PortAssignment@85] - Assigned port 11226 from range 11221 - 13913. 2016-08-24 19:20:17,232 [myid:] - INFO [Time-limited test:PortAssignment@85] - Assigned port 11227 from range 11221 - 13913. 2016-08-24 19:20:17,233 [myid:] - INFO [Time-limited test:PortAssignment@85] - Assigned port 11228 from range 11221 - 13913. 2016-08-24 19:20:17,233 [myid:] - INFO [Time-limited test:PortAssignment@85] - Assigned port 11229 from range 11221 - 13913. 2016-08-24 19:20:17,234 [myid:] - INFO [Time-limited test:PortAssignment@85] - Assigned port 11230 from range 11221 - 13913. 2016-08-24 19:20:17,256 [myid:] - INFO [Time-limited test:QuorumPeerTestBase$MainThread@131] - id = 0 tmpDir = /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/tmp/test7330020881446886387.junit.dir clientPort = 11222 adminServerPort = 8080 2016-08-24 19:20:17,262 [myid:] - INFO [Time-limited test:QuorumPeerTestBase$MainThread@131] - id = 1 tmpDir = /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/tmp/test7260737707555606353.junit.dir clientPort = 11225 adminServerPort = 8080 2016-08-24 19:20:17,267 [myid:] - INFO [Time-limited test:QuorumPeerTestBase$MainThread@131] - id = 2 tmpDir = /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/tmp/test1038022970424066351.junit.dir clientPort = 11228 adminServerPort = 8080 2016-08-24 19:20:17,268 [myid:] - INFO [Thread-1:QuorumPeerConfig@116] - Reading configuration from: /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/tmp/test7260737707555606353.junit.dir/zoo.cfg 2016-08-24 19:20:17,268 [myid:] - INFO [Thread-0:QuorumPeerConfig@116] - Reading configuration from: /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/tmp/test7330020881446886387.junit.dir/zoo.cfg 2016-08-24 19:20:17,269 [myid:] - INFO [Thread-2:QuorumPeerConfig@116] - Reading configuration from: 
/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/tmp/test1038022970424066351.junit.dir/zoo.cfg 2016-08-24 19:20:17,270 [myid:] - INFO [Thread-1:QuorumPeerConfig@318] - clientPortAddress is 0.0.0.0/0.0.0.0:11225 2016-08-24 19:20:17,270 [myid:] - INFO [Thread-0:QuorumPeerConfig@318] - clientPortAddress is 0.0.0.0/0.0.0.0:11222 2016-08-24 19:20:17,271 [myid:] - INFO [Thread-0:QuorumPeerConfig@322] - secureClientPort is not set 2016-08-24 19:20:17,270 [myid:] - INFO [Thread-1:QuorumPeerConfig@322] - secureClientPort is not set 2016-08-24 19:20:17,270 [myid:] - INFO [Thread-2:QuorumPeerConfig@318] - clientPortAddress is 0.0.0.0/0.0.0.0:11228 2016-08-24 19:20:17,271 [myid:] - INFO [Thread-2:QuorumPeerConfig@322] - secureClientPort is not set 2016-08-24 19:20:17,276 [myid:] - INFO [Time-limited test:FourLetterWordMain@85] - connecting to 127.0.0.1 11222 2016-08-24 19:20:17,280 [myid:] - INFO [Time-limited test:ClientBase@248] - server 127.0.0.1:11222 not up java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.client.FourLetterWordMain.send4LetterWord(FourLetterWordMain.java:99) at org.apache.zookeeper.client.FourLetterWordMain.send4LetterWord(FourLetterWordMain.java:69) at org.apache.zookeeper.test.ClientBase.waitForServerUp(ClientBase.java:241) at org.apache.zookeeper.test.ClientBase.waitForServerUp(ClientBase.java:232) at org.apache.zookeeper.test.NonRecoverableErrorTest.testZooKeeperServiceAvailableOnLeader(NonRecoverableErrorTest.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:79) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.lang.Thread.run(Thread.java:745) 2016-08-24 19:20:17,295 [myid:1] - INFO [Thread-1:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3 2016-08-24 19:20:17,295 [myid:2] - INFO [Thread-2:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3 2016-08-24 19:20:17,295 [myid:2] - INFO [Thread-2:DatadirCleanupManager@79] - autopurge.purgeInterval set to 0 2016-08-24 19:20:17,295 [myid:2] - INFO [Thread-2:DatadirCleanupManager@101] - Purge task is not scheduled. 2016-08-24 19:20:17,295 [myid:1] - INFO [Thread-1:DatadirCleanupManager@79] - autopurge.purgeInterval set to 0 2016-08-24 19:20:17,296 [myid:1] - INFO [Thread-1:DatadirCleanupManager@101] - Purge task is not scheduled. 2016-08-24 19:20:17,297 [myid:2] - INFO [Thread-2:ManagedUtil@46] - Log4j found with jmx enabled. 2016-08-24 19:20:17,298 [myid:1] - INFO [Thread-1:ManagedUtil@46] - Log4j found with jmx enabled. 
2016-08-24 19:20:17,300 [myid:0] - INFO [Thread-0:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3 2016-08-24 19:20:17,300 [myid:0] - INFO [Thread-0:DatadirCleanupManager@79] - autopurge.purgeInterval set to 0 2016-08-24 19:20:17,300 [myid:0] - INFO [Thread-0:DatadirCleanupManager@101] - Purge task is not scheduled. 2016-08-24 19:20:17,300 [myid:0] - INFO [Thread-0:ManagedUtil@46] - Log4j found with jmx enabled. 2016-08-24 19:20:17,417 [myid:2] - ERROR [Thread-2:ManagedUtil@114] - Problems while registering log4j jmx beans! javax.management.InstanceAlreadyExistsException: log4j:hiearchy=default at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324) at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) at org.apache.zookeeper.jmx.ManagedUtil.registerLog4jMBeans(ManagedUtil.java:75) at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:131) at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:120) at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.run(QuorumPeerTestBase.java:245) at java.lang.Thread.run(Thread.java:745) 2016-08-24 19:20:17,418 [myid:2] - WARN [Thread-2:QuorumPeerMain@133] - Unable to register log4j JMX control javax.management.JMException: javax.management.InstanceAlreadyExistsException: log4j:hiearchy=default at org.apache.zookeeper.jmx.ManagedUtil.registerLog4jMBeans(ManagedUtil.java:115) at 
org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:131) at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:120) at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.run(QuorumPeerTestBase.java:245) at java.lang.Thread.run(Thread.java:745) 2016-08-24 19:20:17,425 [myid:2] - INFO [Thread-2:QuorumPeerMain@136] - Starting quorum peer 2016-08-24 19:20:17,443 [myid:2] - INFO [Thread-2:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 32 worker threads, and 64 kB direct buffers. 2016-08-24 19:20:17,444 [myid:0] - ERROR [Thread-0:HierarchyDynamicMBean@138] - Could not add loggerMBean for [root]. javax.management.InstanceAlreadyExistsException: log4j:logger=root at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324) at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) at org.apache.log4j.jmx.AbstractDynamicMBean.registerMBean(AbstractDynamicMBean.java:160) at org.apache.log4j.jmx.HierarchyDynamicMBean.addLoggerMBean(HierarchyDynamicMBean.java:125) at org.apache.log4j.jmx.HierarchyDynamicMBean.postRegister(HierarchyDynamicMBean.java:263) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.postRegister(DefaultMBeanServerInterceptor.java:1024) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:974) at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324) at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) at org.apache.zookeeper.jmx.ManagedUtil.registerLog4jMBeans(ManagedUtil.java:75) at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:131) at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:120) at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.run(QuorumPeerTestBase.java:245) at java.lang.Thread.run(Thread.java:745) 2016-08-24 19:20:17,445 [myid:0] - ERROR [Thread-0:ManagedUtil@114] - Problems while registering log4j jmx beans! javax.management.InstanceAlreadyExistsException: log4j:hiearchy=default at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324) at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) at org.apache.zookeeper.jmx.ManagedUtil.registerLog4jMBeans(ManagedUtil.java:75) at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:131) at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:120) at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.run(QuorumPeerTestBase.java:245) at java.lang.Thread.run(Thread.java:745) 2016-08-24 19:20:17,446 [myid:0] - WARN [Thread-0:QuorumPeerMain@133] - Unable to 
register log4j JMX control javax.management.JMException: javax.management.InstanceAlreadyExistsException: log4j:hiearchy=default at org.apache.zookeeper.jmx.ManagedUtil.registerLog4jMBeans(ManagedUtil.java:115) at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:131) at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:120) at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.run(QuorumPeerTestBase.java:245) at java.lang.Thread.run(Thread.java:745) 2016-08-24 19:20:17,447 [myid:0] - INFO [Thread-0:QuorumPeerMain@136] - Starting quorum peer 2016-08-24 19:20:17,447 [myid:0] - INFO [Thread-0:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 32 worker threads, and 64 kB direct buffers. 2016-08-24 19:20:17,454 [myid:0] - INFO [Thread-0:NIOServerCnxnFactory@686] - binding to port /127.0.0.1:11222 2016-08-24 19:20:17,454 [myid:2] - INFO [Thread-2:NIOServerCnxnFactory@686] - binding to port /127.0.0.1:11228 2016-08-24 19:20:17,473 [myid:1] - INFO [Thread-1:QuorumPeerMain@136] - Starting quorum peer 2016-08-24 19:20:17,473 [myid:1] - INFO [Thread-1:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 32 worker threads, and 64 kB direct buffers. 
2016-08-24 19:20:17,474 [myid:1] - INFO [Thread-1:NIOServerCnxnFactory@686] - binding to port /127.0.0.1:11225 2016-08-24 19:20:17,488 [myid:0] - INFO [Thread-0:QuorumPeer@1327] - Local sessions disabled 2016-08-24 19:20:17,488 [myid:2] - INFO [Thread-2:QuorumPeer@1327] - Local sessions disabled 2016-08-24 19:20:17,488 [myid:1] - INFO [Thread-1:QuorumPeer@1327] - Local sessions disabled 2016-08-24 19:20:17,488 [myid:2] - INFO [Thread-2:QuorumPeer@1338] - Local session upgrading disabled 2016-08-24 19:20:17,488 [myid:0] - INFO [Thread-0:QuorumPeer@1338] - Local session upgrading disabled 2016-08-24 19:20:17,488 [myid:0] - INFO [Thread-0:QuorumPeer@1305] - tickTime set to 4000 2016-08-24 19:20:17,488 [myid:2] - INFO [Thread-2:QuorumPeer@1305] - tickTime set to 4000 2016-08-24 19:20:17,489 [myid:2] - INFO [Thread-2:QuorumPeer@1349] - minSessionTimeout set to 8000 2016-08-24 19:20:17,489 [myid:2] - INFO [Thread-2:QuorumPeer@1360] - maxSessionTimeout set to 80000 2016-08-24 19:20:17,489 [myid:2] - INFO [Thread-2:QuorumPeer@1375] - initLimit set to 10 2016-08-24 19:20:17,488 [myid:1] - INFO [Thread-1:QuorumPeer@1338] - Local session upgrading disabled 2016-08-24 19:20:17,490 [myid:1] - INFO [Thread-1:QuorumPeer@1305] - tickTime set to 4000 2016-08-24 19:20:17,490 [myid:1] - INFO [Thread-1:QuorumPeer@1349] - minSessionTimeout set to 8000 2016-08-24 19:20:17,490 [myid:1] - INFO [Thread-1:QuorumPeer@1360] - maxSessionTimeout set to 80000 2016-08-24 19:20:17,490 [myid:1] - INFO [Thread-1:QuorumPeer@1375] - initLimit set to 10 2016-08-24 19:20:17,488 [myid:0] - INFO [Thread-0:QuorumPeer@1349] - minSessionTimeout set to 8000 2016-08-24 19:20:17,491 [myid:0] - INFO [Thread-0:QuorumPeer@1360] - maxSessionTimeout set to 80000 2016-08-24 19:20:17,491 [myid:0] - INFO [Thread-0:QuorumPeer@1375] - initLimit set to 10 2016-08-24 19:20:17,514 [myid:2] - INFO [Thread-2:QuorumPeer@776] - currentEpoch not found! Creating with a reasonable default of 0. 
This should only happen when you are upgrading your installation 2016-08-24 19:20:17,514 [myid:1] - INFO [Thread-1:QuorumPeer@776] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-08-24 19:20:17,515 [myid:0] - INFO [Thread-0:QuorumPeer@776] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-08-24 19:20:17,516 [myid:0] - INFO [Thread-0:QuorumPeer@791] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-08-24 19:20:17,516 [myid:2] - INFO [Thread-2:QuorumPeer@791] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-08-24 19:20:17,521 [myid:1] - INFO [Thread-1:QuorumPeer@791] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-08-24 19:20:17,535 [myid:] - INFO [Time-limited test:FourLetterWordMain@85] - connecting to 127.0.0.1 11222 2016-08-24 19:20:17,539 [myid:0] - INFO [NIOServerCxnFactory.AcceptThread:/127.0.0.1:11222:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:41083 2016-08-24 19:20:17,554 [myid:0] - INFO [NIOWorkerThread-1:NIOServerCnxn@485] - Processing stat command from /127.0.0.1:41083 2016-08-24 19:20:17,568 [myid:0] - INFO [NIOWorkerThread-1:NIOServerCnxn@607] - Closed socket connection for client /127.0.0.1:41083 (no session established for client) 2016-08-24 19:20:17,595 [myid:0] - INFO [QuorumPeerListener:QuorumCnxManager$Listener@632] - My election bind port: /127.0.0.1:11224 2016-08-24 19:20:17,606 [myid:1] - INFO [QuorumPeerListener:QuorumCnxManager$Listener@632] - My election bind port: /127.0.0.1:11227 2016-08-24 19:20:17,607 [myid:0] - INFO 
[QuorumPeer[myid=0](plain=/127.0.0.1:11222)(secure=disabled):QuorumPeer@1033] - LOOKING 2016-08-24 19:20:17,611 [myid:1] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:11225)(secure=disabled):QuorumPeer@1033] - LOOKING 2016-08-24 19:20:17,624 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:11222)(secure=disabled):FastLeaderElection@894] - New election. My id = 0, proposed zxid=0x0 2016-08-24 19:20:17,626 [myid:1] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:11225)(secure=disabled):FastLeaderElection@894] - New election. My id = 1, proposed zxid=0x0 2016-08-24 19:20:17,634 [myid:2] - INFO [QuorumPeerListener:QuorumCnxManager$Listener@632] - My election bind port: /127.0.0.1:11230 2016-08-24 19:20:17,640 [myid:2] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:11228)(secure=disabled):QuorumPeer@1033] - LOOKING 2016-08-24 19:20:17,640 [myid:2] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:11228)(secure=disabled):FastLeaderElection@894] - New election. My id = 2, proposed zxid=0x0 2016-08-24 19:20:17,642 [myid:0] - INFO [/127.0.0.1:11224:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:45368 2016-08-24 19:20:17,645 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@688] - Notification: 2 (message format version), 0 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-08-24 19:20:17,649 [myid:0] - INFO [/127.0.0.1:11224:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:45369 2016-08-24 19:20:17,650 [myid:1] - INFO [/127.0.0.1:11227:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:43290 2016-08-24 19:20:17,653 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-08-24 19:20:17,654 [myid:0] - INFO 
[WorkerSender[myid=0]:QuorumCnxManager@276] - Have smaller server identifier, so dropping the connection: (1, 0) 2016-08-24 19:20:17,657 [myid:2] - INFO [/127.0.0.1:11230:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:52198 2016-08-24 19:20:17,657 [myid:1] - INFO [WorkerSender[myid=1]:QuorumCnxManager@276] - Have smaller server identifier, so dropping the connection: (2, 1) 2016-08-24 19:20:17,659 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-08-24 19:20:17,659 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@688] - Notification: 2 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-08-24 19:20:17,659 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@688] - Notification: 2 (message format version), 0 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-08-24 19:20:17,679 [myid:2] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 1 my id = 2 2016-08-24 19:20:17,680 [myid:2] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker 2016-08-24 19:20:17,681 [myid:1] - INFO [/127.0.0.1:11227:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:43291 2016-08-24 19:20:17,683 [myid:0] - INFO [/127.0.0.1:11224:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:45377 2016-08-24 19:20:17,683 [myid:1] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@832] - Interrupted while waiting for message on queue java.lang.InterruptedException at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:982) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:63) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:820) 2016-08-24 19:20:17,684 [myid:1] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 0 my id = 1 2016-08-24 19:20:17,686 [myid:1] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@915] - Connection broken for id 0, my id = 1, error = java.net.SocketException: Socket closed at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.read(SocketInputStream.java:152) at java.net.SocketInputStream.read(SocketInputStream.java:122) at java.net.SocketInputStream.read(SocketInputStream.java:210) at java.io.DataInputStream.readInt(DataInputStream.java:387) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900) 2016-08-24 19:20:17,686 [myid:1] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker 2016-08-24 19:20:17,687 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@688] - Notification: 2 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-08-24 19:20:17,687 [myid:0] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@915] - Connection broken for id 1, my id = 0, error = java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at 
org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900) 2016-08-24 19:20:17,699 [myid:0] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker 2016-08-24 19:20:17,696 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-08-24 19:20:17,701 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-08-24 19:20:17,702 [myid:0] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@837] - Exception when using channel: for id 1 my id = 0 error = java.net.SocketException: Broken pipe 2016-08-24 19:20:17,688 [myid:1] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@837] - Exception when using channel: for id 2 my id = 1 error = java.net.SocketException: Broken pipe 2016-08-24 19:20:17,701 [myid:1] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@915] - Connection broken for id 2, my id = 1, error = java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900) 2016-08-24 19:20:17,703 [myid:1] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker 2016-08-24 19:20:17,689 [myid:1] - INFO [/127.0.0.1:11227:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:43293 2016-08-24 19:20:17,708 [myid:1] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 2 my id = 1 2016-08-24 19:20:17,708 [myid:0] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 1 my id = 0 2016-08-24 19:20:17,710 [myid:2] - INFO 
[WorkerReceiver[myid=2]:FastLeaderElection@688] - Notification: 2 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-08-24 19:20:17,710 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-08-24 19:20:17,720 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-08-24 19:20:17,722 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-08-24 19:20:17,728 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-08-24 19:20:17,728 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@688] - Notification: 2 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-08-24 19:20:17,729 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-08-24 19:20:17,819 [myid:] - INFO [Time-limited test:FourLetterWordMain@85] - connecting to 127.0.0.1 11222 2016-08-24 19:20:17,820 [myid:0] - INFO 
[NIOServerCxnFactory.AcceptThread:/127.0.0.1:11222:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:41114 2016-08-24 19:20:17,822 [myid:0] - INFO [NIOWorkerThread-2:NIOServerCnxn@485] - Processing stat command from /127.0.0.1:41114 2016-08-24 19:20:17,822 [myid:0] - INFO [NIOWorkerThread-2:NIOServerCnxn@607] - Closed socket connection for client /127.0.0.1:41114 (no session established for client) 2016-08-24 19:20:17,923 [myid:1] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:11225)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id1,name1=replica.1,name2=LeaderElection] 2016-08-24 19:20:17,923 [myid:1] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:11225)(secure=disabled):QuorumPeer@1109] - FOLLOWING 2016-08-24 19:20:17,928 [myid:2] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:11228)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id2,name1=replica.2,name2=LeaderElection] 2016-08-24 19:20:17,929 [myid:2] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:11228)(secure=disabled):QuorumPeer@1121] - LEADING 2016-08-24 19:20:17,929 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:11222)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id0,name1=replica.0,name2=LeaderElection] 2016-08-24 19:20:17,930 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:11222)(secure=disabled):QuorumPeer@1109] - FOLLOWING 2016-08-24 19:20:17,932 [myid:1] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:11225)(secure=disabled):Learner@88] - TCP NoDelay set to: true 2016-08-24 19:20:17,937 [myid:2] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:11228)(secure=disabled):Leader@63] - TCP NoDelay set to: true 2016-08-24 19:20:17,937 [myid:2] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:11228)(secure=disabled):Leader@83] - zookeeper.leader.maxConcurrentSnapshots = 10 2016-08-24 19:20:17,938 [myid:2] - 
INFO [QuorumPeer[myid=2](plain=/127.0.0.1:11228)(secure=disabled):Leader@85] - zookeeper.leader.maxConcurrentSnapshotTimeout = 5 2016-08-24 19:20:17,942 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:11222)(secure=disabled):Environment@109] - Server environment:zookeeper.version=3.6.0-SNAPSHOT-1757568, built on 08/24/2016 19:17 GMT 2016-08-24 19:20:17,942 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:11222)(secure=disabled):Environment@109] - Server environment:host.name=asf907.gq1.ygridcore.net 2016-08-24 19:20:17,942 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:11222)(secure=disabled):Environment@109] - Server environment:java.version=1.7.0_80 2016-08-24 19:20:17,943 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:11222)(secure=disabled):Environment@109] - Server environment:java.vendor=Oracle Corporation 2016-08-24 19:20:17,943 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:11222)(secure=disabled):Environment@109] - Server environment:java.home=/usr/local/asfpackages/java/jdk1.7.0_80/jre 2016-08-24 19:20:17,943 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:11222)(secure=disabled):Environment@109] - Server 
environment:java.class.path=/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/classes:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/lib/antlr-2.7.7.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/lib/antlr4-runtime-4.5.1-1.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/lib/checkstyle-6.13.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/lib/commons-beanutils-1.9.2.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/lib/commons-cli-1.3.1.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/lib/commons-collections-3.2.2.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/lib/commons-lang3-3.4.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/lib/commons-logging-1.1.1.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/lib/guava-18.0.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/lib/hamcrest-core-1.3.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/lib/junit-4.12.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/lib/mockito-all-1.8.2.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/classes:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/src/java/lib/ivy-2.4.0.jar:/home/jenkins/tools/ant/latest/lib/ant.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/lib/commons-cli-1.2.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/lib/jackson-core-asl-1.9.11.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/lib/jackson-mapper-asl-1.9.11.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/lib/javacc.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/lib/jetty-6.1.26.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/li
b/jetty-util-6.1.26.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/lib/jline-2.11.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/lib/log4j-1.2.17.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/lib/netty-3.10.5.Final.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/lib/servlet-api-2.5-20081211.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/lib/slf4j-api-1.7.5.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/lib/slf4j-log4j12-1.7.5.jar:/usr/local/asfpackages/ant/apache-ant-1.9.7/lib/ant-launcher.jar:/home/jenkins/tools/ant/latest/lib/ant-junit.jar:/home/jenkins/tools/ant/latest/lib/ant-junit4.jar 2016-08-24 19:20:17,943 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:11222)(secure=disabled):Environment@109] - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib 2016-08-24 19:20:17,943 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:11222)(secure=disabled):Environment@109] - Server environment:java.io.tmpdir=/tmp 2016-08-24 19:20:17,943 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:11222)(secure=disabled):Environment@109] - Server environment:java.compiler=<NA> 2016-08-24 19:20:17,943 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:11222)(secure=disabled):Environment@109] - Server environment:os.name=Linux 2016-08-24 19:20:17,943 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:11222)(secure=disabled):Environment@109] - Server environment:os.arch=amd64 2016-08-24 19:20:17,944 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:11222)(secure=disabled):Environment@109] - Server environment:os.version=3.13.0-36-lowlatency 2016-08-24 19:20:17,944 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:11222)(secure=disabled):Environment@109] - Server environment:user.name=jenkins 2016-08-24 19:20:17,944 [myid:0] - INFO 
[QuorumPeer[myid=0](plain=/127.0.0.1:11222)(secure=disabled):Environment@109] - Server environment:user.home=/home/jenkins 2016-08-24 19:20:17,944 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:11222)(secure=disabled):Environment@109] - Server environment:user.dir=/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test 2016-08-24 19:20:17,944 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:11222)(secure=disabled):Environment@109] - Server environment:os.memory.free=369MB 2016-08-24 19:20:17,944 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:11222)(secure=disabled):Environment@109] - Server environment:os.memory.max=491MB 2016-08-24 19:20:17,944 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:11222)(secure=disabled):Environment@109] - Server environment:os.memory.total=491MB 2016-08-24 19:20:17,946 [myid:2] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:11228)(secure=disabled):ZooKeeperServer@889] - minSessionTimeout set to 8000 2016-08-24 19:20:17,946 [myid:1] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:11225)(secure=disabled):ZooKeeperServer@889] - minSessionTimeout set to 8000 2016-08-24 19:20:17,946 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:11222)(secure=disabled):ZooKeeperServer@889] - minSessionTimeout set to 8000 2016-08-24 19:20:17,946 [myid:1] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:11225)(secure=disabled):ZooKeeperServer@898] - maxSessionTimeout set to 80000 2016-08-24 19:20:17,948 [myid:1] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:11225)(secure=disabled):ZooKeeperServer@159] - Created server with tickTime 4000 minSessionTimeout 8000 maxSessionTimeout 80000 datadir /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/tmp/test7260737707555606353.junit.dir/data/version-2 snapdir /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/tmp/test7260737707555606353.junit.dir/data/version-2 2016-08-24 19:20:17,948 [myid:1] - INFO 
[QuorumPeer[myid=1](plain=/127.0.0.1:11225)(secure=disabled):Follower@66] - FOLLOWING - LEADER ELECTION TOOK - 25 MS 2016-08-24 19:20:17,946 [myid:2] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:11228)(secure=disabled):ZooKeeperServer@898] - maxSessionTimeout set to 80000 2016-08-24 19:20:17,947 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:11222)(secure=disabled):ZooKeeperServer@898] - maxSessionTimeout set to 80000 2016-08-24 19:20:17,953 [myid:2] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:11228)(secure=disabled):ZooKeeperServer@159] - Created server with tickTime 4000 minSessionTimeout 8000 maxSessionTimeout 80000 datadir /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/tmp/test1038022970424066351.junit.dir/data/version-2 snapdir /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/tmp/test1038022970424066351.junit.dir/data/version-2 2016-08-24 19:20:17,953 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:11222)(secure=disabled):ZooKeeperServer@159] - Created server with tickTime 4000 minSessionTimeout 8000 maxSessionTimeout 80000 datadir /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/tmp/test7330020881446886387.junit.dir/data/version-2 snapdir /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/tmp/test7330020881446886387.junit.dir/data/version-2 2016-08-24 19:20:17,953 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:11222)(secure=disabled):Follower@66] - FOLLOWING - LEADER ELECTION TOOK - 23 MS 2016-08-24 19:20:17,953 [myid:1] - WARN [QuorumPeer[myid=1](plain=/127.0.0.1:11225)(secure=disabled):Learner@273] - Unexpected exception, tries=0, remaining init limit=40000, connecting to /127.0.0.1:11229 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) 
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.Learner.sockConnect(Learner.java:227) at org.apache.zookeeper.server.quorum.Learner.connectToLeader(Learner.java:256) at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:74) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:1111) 2016-08-24 19:20:17,954 [myid:0] - WARN [QuorumPeer[myid=0](plain=/127.0.0.1:11222)(secure=disabled):Learner@273] - Unexpected exception, tries=0, remaining init limit=40000, connecting to /127.0.0.1:11229 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.Learner.sockConnect(Learner.java:227) at org.apache.zookeeper.server.quorum.Learner.connectToLeader(Learner.java:256) at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:74) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:1111) 2016-08-24 19:20:17,956 [myid:2] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:11228)(secure=disabled):Leader@412] - LEADING - LEADER ELECTION TOOK - 27 MS 2016-08-24 19:20:17,960 [myid:2] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:11228)(secure=disabled):FileTxnSnapLog@298] - Snapshotting: 0x0 to /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/tmp/test1038022970424066351.junit.dir/data/version-2/snapshot.0 2016-08-24 19:20:18,073 [myid:] - INFO [Time-limited test:FourLetterWordMain@85] - connecting to 
127.0.0.1 11222 2016-08-24 19:20:18,075 [myid:0] - INFO [NIOServerCxnFactory.AcceptThread:/127.0.0.1:11222:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:41147 2016-08-24 19:20:18,080 [myid:0] - INFO [NIOWorkerThread-3:NIOServerCnxn@485] - Processing stat command from /127.0.0.1:41147 2016-08-24 19:20:18,080 [myid:0] - INFO [NIOWorkerThread-3:NIOServerCnxn@607] - Closed socket connection for client /127.0.0.1:41147 (no session established for client) 2016-08-24 19:20:18,332 [myid:] - INFO [Time-limited test:FourLetterWordMain@85] - connecting to 127.0.0.1 11222 2016-08-24 19:20:18,333 [myid:0] - INFO [NIOServerCxnFactory.AcceptThread:/127.0.0.1:11222:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:41177 2016-08-24 19:20:18,334 [myid:0] - INFO [NIOWorkerThread-4:NIOServerCnxn@485] - Processing stat command from /127.0.0.1:41177 2016-08-24 19:20:18,335 [myid:0] - INFO [NIOWorkerThread-4:NIOServerCnxn@607] - Closed socket connection for client /127.0.0.1:41177 (no session established for client) 2016-08-24 19:20:18,585 [myid:] - INFO [Time-limited test:FourLetterWordMain@85] - connecting to 127.0.0.1 11222 2016-08-24 19:20:18,586 [myid:0] - INFO [NIOServerCxnFactory.AcceptThread:/127.0.0.1:11222:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:41205 2016-08-24 19:20:18,587 [myid:0] - INFO [NIOWorkerThread-5:NIOServerCnxn@485] - Processing stat command from /127.0.0.1:41205 2016-08-24 19:20:18,588 [myid:0] - INFO [NIOWorkerThread-5:NIOServerCnxn@607] - Closed socket connection for client /127.0.0.1:41205 (no session established for client) 2016-08-24 19:20:18,838 [myid:] - INFO [Time-limited test:FourLetterWordMain@85] - connecting to 127.0.0.1 11222 2016-08-24 19:20:18,840 [myid:0] - INFO [NIOServerCxnFactory.AcceptThread:/127.0.0.1:11222:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:41223 2016-08-24 19:20:18,865 [myid:0] - 
INFO [NIOWorkerThread-6:NIOServerCnxn@485] - Processing stat command from /127.0.0.1:41223 2016-08-24 19:20:18,865 [myid:0] - INFO [NIOWorkerThread-6:NIOServerCnxn@607] - Closed socket connection for client /127.0.0.1:41223 (no session established for client) 2016-08-24 19:20:18,965 [myid:2] - INFO [LearnerHandler-/127.0.0.1:40597:LearnerHandler@382] - Follower sid: 1 : info : 127.0.0.1:11226:11227:participant;127.0.0.1:11225 2016-08-24 19:20:18,965 [myid:2] - INFO [LearnerHandler-/127.0.0.1:40598:LearnerHandler@382] - Follower sid: 0 : info : 127.0.0.1:11223:11224:participant;127.0.0.1:11222 2016-08-24 19:20:18,972 [myid:2] - INFO [LearnerHandler-/127.0.0.1:40598:LearnerHandler@683] - Synchronizing with Follower sid: 0 maxCommittedLog=0x0 minCommittedLog=0x0 lastProcessedZxid=0x0 peerLastZxid=0x0 2016-08-24 19:20:18,972 [myid:2] - INFO [LearnerHandler-/127.0.0.1:40597:LearnerHandler@683] - Synchronizing with Follower sid: 1 maxCommittedLog=0x0 minCommittedLog=0x0 lastProcessedZxid=0x0 peerLastZxid=0x0 2016-08-24 19:20:18,972 [myid:2] - INFO [LearnerHandler-/127.0.0.1:40598:LearnerHandler@727] - Sending DIFF zxid=0x0 for peer sid: 0 2016-08-24 19:20:18,972 [myid:2] - INFO [LearnerHandler-/127.0.0.1:40597:LearnerHandler@727] - Sending DIFF zxid=0x0 for peer sid: 1 2016-08-24 19:20:18,974 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:11222)(secure=disabled):Learner@366] - Getting a diff from the leader 0x0 2016-08-24 19:20:18,974 [myid:1] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:11225)(secure=disabled):Learner@366] - Getting a diff from the leader 0x0 2016-08-24 19:20:18,978 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:11222)(secure=disabled):Learner@509] - Learner received NEWLEADER message 2016-08-24 19:20:18,978 [myid:1] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:11225)(secure=disabled):Learner@509] - Learner received NEWLEADER message 2016-08-24 19:20:18,980 [myid:0] - INFO 
[QuorumPeer[myid=0](plain=/127.0.0.1:11222)(secure=disabled):FileTxnSnapLog@298] - Snapshotting: 0x0 to /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/tmp/test7330020881446886387.junit.dir/data/version-2/snapshot.0 2016-08-24 19:20:18,981 [myid:1] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:11225)(secure=disabled):FileTxnSnapLog@298] - Snapshotting: 0x0 to /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/tmp/test7260737707555606353.junit.dir/data/version-2/snapshot.0 2016-08-24 19:20:18,982 [myid:2] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:11228)(secure=disabled):Leader@1255] - Have quorum of supporters, sids: [ [0, 2],[0, 2] ]; starting up and setting last processed zxid: 0x100000000 2016-08-24 19:20:19,012 [myid:2] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:11228)(secure=disabled):CommitProcessor@318] - Configuring CommitProcessor with 16 worker threads. 2016-08-24 19:20:19,056 [myid:2] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:11228)(secure=disabled):ContainerManager@64] - Using checkIntervalMs=60000 maxPerMinute=10000 2016-08-24 19:20:19,062 [myid:1] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:11225)(secure=disabled):Learner@493] - Learner received UPTODATE message 2016-08-24 19:20:19,062 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:11222)(secure=disabled):Learner@493] - Learner received UPTODATE message 2016-08-24 19:20:19,067 [myid:0] - INFO [QuorumPeer[myid=0](plain=/127.0.0.1:11222)(secure=disabled):CommitProcessor@318] - Configuring CommitProcessor with 16 worker threads. 2016-08-24 19:20:19,080 [myid:1] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:11225)(secure=disabled):CommitProcessor@318] - Configuring CommitProcessor with 16 worker threads. 
2016-08-24 19:20:19,116 [myid:] - INFO [Time-limited test:FourLetterWordMain@85] - connecting to 127.0.0.1 11222 2016-08-24 19:20:19,117 [myid:0] - INFO [NIOServerCxnFactory.AcceptThread:/127.0.0.1:11222:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:41246 2016-08-24 19:20:19,120 [myid:0] - INFO [NIOWorkerThread-7:NIOServerCnxn@485] - Processing stat command from /127.0.0.1:41246 2016-08-24 19:20:19,122 [myid:0] - INFO [NIOWorkerThread-7:StatCommand@49] - Stat command output 2016-08-24 19:20:19,123 [myid:0] - INFO [NIOWorkerThread-7:NIOServerCnxn@607] - Closed socket connection for client /127.0.0.1:41246 (no session established for client) 2016-08-24 19:20:19,123 [myid:] - INFO [Time-limited test:FourLetterWordMain@85] - connecting to 127.0.0.1 11225 2016-08-24 19:20:19,124 [myid:1] - INFO [NIOServerCxnFactory.AcceptThread:/127.0.0.1:11225:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:55153 2016-08-24 19:20:19,125 [myid:1] - INFO [NIOWorkerThread-1:NIOServerCnxn@485] - Processing stat command from /127.0.0.1:55153 2016-08-24 19:20:19,125 [myid:1] - INFO [NIOWorkerThread-1:StatCommand@49] - Stat command output 2016-08-24 19:20:19,126 [myid:1] - INFO [NIOWorkerThread-1:NIOServerCnxn@607] - Closed socket connection for client /127.0.0.1:55153 (no session established for client) 2016-08-24 19:20:19,126 [myid:] - INFO [Time-limited test:FourLetterWordMain@85] - connecting to 127.0.0.1 11228 2016-08-24 19:20:19,127 [myid:2] - INFO [NIOServerCxnFactory.AcceptThread:/127.0.0.1:11228:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:59066 2016-08-24 19:20:19,130 [myid:2] - INFO [NIOWorkerThread-1:NIOServerCnxn@485] - Processing stat command from /127.0.0.1:59066 2016-08-24 19:20:19,131 [myid:2] - INFO [NIOWorkerThread-1:StatCommand@49] - Stat command output 2016-08-24 19:20:19,131 [myid:2] - INFO [NIOWorkerThread-1:NIOServerCnxn@607] - Closed socket connection for 
client /127.0.0.1:59066 (no session established for client) 2016-08-24 19:20:19,138 [myid:] - INFO [Time-limited test:Environment@109] - Client environment:zookeeper.version=3.6.0-SNAPSHOT-1757568, built on 08/24/2016 19:17 GMT 2016-08-24 19:20:19,138 [myid:] - INFO [Time-limited test:Environment@109] - Client environment:host.name=asf907.gq1.ygridcore.net 2016-08-24 19:20:19,138 [myid:] - INFO [Time-limited test:Environment@109] - Client environment:java.version=1.7.0_80 2016-08-24 19:20:19,139 [myid:] - INFO [Time-limited test:Environment@109] - Client environment:java.vendor=Oracle Corporation 2016-08-24 19:20:19,139 [myid:] - INFO [Time-limited test:Environment@109] - Client environment:java.home=/usr/local/asfpackages/java/jdk1.7.0_80/jre 2016-08-24 19:20:19,139 [myid:] - INFO [Time-limited test:Environment@109] - Client environment:java.class.path=/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/classes:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/lib/antlr-2.7.7.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/lib/antlr4-runtime-4.5.1-1.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/lib/checkstyle-6.13.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/lib/commons-beanutils-1.9.2.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/lib/commons-cli-1.3.1.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/lib/commons-collections-3.2.2.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/lib/commons-lang3-3.4.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/lib/commons-logging-1.1.1.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/lib/guava-18.0.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/lib/hamcrest-core-1.3.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/lib/ju
nit-4.12.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test/lib/mockito-all-1.8.2.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/classes:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/src/java/lib/ivy-2.4.0.jar:/home/jenkins/tools/ant/latest/lib/ant.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/lib/commons-cli-1.2.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/lib/jackson-core-asl-1.9.11.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/lib/jackson-mapper-asl-1.9.11.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/lib/javacc.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/lib/jetty-6.1.26.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/lib/jetty-util-6.1.26.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/lib/jline-2.11.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/lib/log4j-1.2.17.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/lib/netty-3.10.5.Final.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/lib/servlet-api-2.5-20081211.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/lib/slf4j-api-1.7.5.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/lib/slf4j-log4j12-1.7.5.jar:/usr/local/asfpackages/ant/apache-ant-1.9.7/lib/ant-launcher.jar:/home/jenkins/tools/ant/latest/lib/ant-junit.jar:/home/jenkins/tools/ant/latest/lib/ant-junit4.jar 2016-08-24 19:20:19,139 [myid:] - INFO [Time-limited test:Environment@109] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib 2016-08-24 19:20:19,140 [myid:] - INFO [Time-limited test:Environment@109] - Client environment:java.io.tmpdir=/tmp 2016-08-24 19:20:19,140 [myid:] - INFO [Time-limited test:Environment@109] - Client environment:java.compiler=<NA> 2016-08-24 
19:20:19,140 [myid:] - INFO [Time-limited test:Environment@109] - Client environment:os.name=Linux 2016-08-24 19:20:19,140 [myid:] - INFO [Time-limited test:Environment@109] - Client environment:os.arch=amd64 2016-08-24 19:20:19,141 [myid:] - INFO [Time-limited test:Environment@109] - Client environment:os.version=3.13.0-36-lowlatency 2016-08-24 19:20:19,141 [myid:] - INFO [Time-limited test:Environment@109] - Client environment:user.name=jenkins 2016-08-24 19:20:19,141 [myid:] - INFO [Time-limited test:Environment@109] - Client environment:user.home=/home/jenkins 2016-08-24 19:20:19,141 [myid:] - INFO [Time-limited test:Environment@109] - Client environment:user.dir=/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/build/test 2016-08-24 19:20:19,141 [myid:] - INFO [Time-limited test:Environment@109] - Client environment:os.memory.free=462MB 2016-08-24 19:20:19,142 [myid:] - INFO [Time-limited test:Environment@109] - Client environment:os.memory.max=491MB 2016-08-24 19:20:19,142 [myid:] - INFO [Time-limited test:Environment@109] - Client environment:os.memory.total=491MB 2016-08-24 19:20:19,145 [myid:] - INFO [Time-limited test:ZooKeeper@855] - Initiating client connection, connectString=127.0.0.1:11222 sessionTimeout=30000 watcher=org.apache.zookeeper.test.ClientBase$CountdownWatcher@2b6c7d59 2016-08-24 19:20:19,169 [myid:127.0.0.1:11222] - INFO [Time-limited test-SendThread(127.0.0.1:11222):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:11222. 
Will not attempt to authenticate using SASL (unknown error) 2016-08-24 19:20:19,172 [myid:0] - INFO [NIOServerCxnFactory.AcceptThread:/127.0.0.1:11222:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:41255 2016-08-24 19:20:19,177 [myid:127.0.0.1:11222] - INFO [Time-limited test-SendThread(127.0.0.1:11222):ClientCnxn$SendThread@948] - Socket connection established, initiating session, client: /127.0.0.1:41255, server: 127.0.0.1/127.0.0.1:11222 2016-08-24 19:20:19,201 [myid:0] - INFO [NIOWorkerThread-8:ZooKeeperServer@995] - Client attempting to establish new session at /127.0.0.1:41255 2016-08-24 19:20:19,206 [myid:2] - INFO [SyncThread:2:FileTxnLog@204] - Creating new log file: log.100000001 2016-08-24 19:20:19,206 [myid:0] - WARN [QuorumPeer[myid=0](plain=/127.0.0.1:11222)(secure=disabled):Follower@122] - Got zxid 0x100000001 expected 0x1 2016-08-24 19:20:19,206 [myid:1] - WARN [QuorumPeer[myid=1](plain=/127.0.0.1:11225)(secure=disabled):Follower@122] - Got zxid 0x100000001 expected 0x1 2016-08-24 19:20:19,207 [myid:0] - INFO [SyncThread:0:FileTxnLog@204] - Creating new log file: log.100000001 2016-08-24 19:20:19,207 [myid:1] - INFO [SyncThread:1:FileTxnLog@204] - Creating new log file: log.100000001 2016-08-24 19:20:19,229 [myid:0] - INFO [CommitProcWorkThread-1:ZooKeeperServer@709] - Established session 0x2162abf710000 with negotiated timeout 30000 for client /127.0.0.1:41255 2016-08-24 19:20:19,231 [myid:127.0.0.1:11222] - INFO [Time-limited test-SendThread(127.0.0.1:11222):ClientCnxn$SendThread@1381] - Session establishment complete on server 127.0.0.1/127.0.0.1:11222, sessionid = 0x2162abf710000, negotiated timeout = 30000 2016-08-24 19:20:19,251 [myid:2] - INFO [SyncThread:2:FileTxnLog@204] - Creating new log file: log.100000003 2016-08-24 19:20:19,251 [myid:2] - ERROR [SyncThread:2:ZooKeeperCriticalThread@48] - Severe unrecoverable error, from thread : SyncThread:2 java.io.IOException: Input/output error at 
org.apache.zookeeper.test.NonRecoverableErrorTest$1.commit(NonRecoverableErrorTest.java:101) at org.apache.zookeeper.server.ZKDatabase.commit(ZKDatabase.java:557) at org.apache.zookeeper.server.SyncRequestProcessor.flush(SyncRequestProcessor.java:178) at org.apache.zookeeper.server.SyncRequestProcessor.run(SyncRequestProcessor.java:113) 2016-08-24 19:20:19,251 [myid:2] - INFO [SyncThread:2:ZooKeeperServerListenerImpl@42] - Thread SyncThread:2 exits, error code 1 2016-08-24 19:20:19,252 [myid:2] - INFO [SyncThread:2:SyncRequestProcessor@169] - SyncRequestProcessor exited! 2016-08-24 19:20:19,253 [myid:] - INFO [Time-limited test:JUnit4ZKTestRunner$LoggedInvokeMethod@98] - TEST METHOD FAILED testZooKeeperServiceAvailableOnLeader java.lang.AssertionError: IOException is expected due to error injected to transaction log commit at org.junit.Assert.fail(Assert.java:88) at org.apache.zookeeper.test.NonRecoverableErrorTest.testZooKeeperServiceAvailableOnLeader(NonRecoverableErrorTest.java:115) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:79) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at 
java.lang.Thread.run(Thread.java:745) 2016-08-24 19:20:19,254 [myid:] - INFO [main:ZKTestCase$1@70] - FAILED testZooKeeperServiceAvailableOnLeader java.lang.AssertionError: IOException is expected due to error injected to transaction log commit at org.junit.Assert.fail(Assert.java:88) at org.apache.zookeeper.test.NonRecoverableErrorTest.testZooKeeperServiceAvailableOnLeader(NonRecoverableErrorTest.java:115) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:79) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.lang.Thread.run(Thread.java:745) 2016-08-24 19:20:19,255 [myid:] - INFO [main:ZKTestCase$1@60] - FINISHED testZooKeeperServiceAvailableOnLeader {noformat} |
flaky, flaky-test | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 1 year, 21 weeks ago | 0|i32rcn: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2528 | ZooKeeper cluster can become unavailable due to power failures |
Bug | Open | Critical | Unresolved | Unassigned | Ramnatthan Alagappan | Ramnatthan Alagappan | 24/Aug/16 16:45 | 09/Mar/17 15:39 | 3.4.8 | server | 0 | 3 | A normal ZooKeeper cluster of 3 nodes running on 3 Linux machines. | ZooKeeper cluster can become unavailable if power failures happen at certain specific points in time. Details: I am running a three-node ZooKeeper cluster. I perform a simple update from a client machine. When I try to update a value, ZooKeeper creates a new log file (for example, when the current log is fully utilized). First, it creates the file and appends some header information to the newly created log. The system call sequence looks like below: creat(log.200000001) append(log.200000001, offset=0, count=16) Now, if a power failure happens just after the creat of the log file but before the append of the header information, the node simply crashes with an EOF exception. If the same problem occurs at two or more nodes in my three-node cluster, the entire cluster becomes unavailable as the majority of servers have crashed because of the above problem. A power failure at the same time across multiple nodes may be possible in single data center or single rack deployment scenarios. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 30 weeks, 1 day ago | 0|i32qxb: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
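The failure window described in ZOOKEEPER-2528 — power loss after creat() of the log file but before the header append — can be narrowed by forcing the header to disk before the file is treated as live, so recovery can discard an empty file instead of crashing on an EOF. A minimal sketch of that idea (class and method names are hypothetical; this is not ZooKeeper's actual FileTxnLog code):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Hypothetical sketch: create a transaction log file and make its header
// durable (fsync) in one step. If power fails between create and the forced
// header write, the file is left empty and recovery can skip it rather than
// aborting with an EOF exception.
public class CrashSafeLogCreate {
    /** Creates/truncates the log file, writes the header, fsyncs, and returns the file size. */
    public static long writeLogHeader(Path logFile, byte[] header) {
        try (FileChannel ch = FileChannel.open(logFile,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE,
                StandardOpenOption.TRUNCATE_EXISTING)) {
            ByteBuffer buf = ByteBuffer.wrap(header);
            while (buf.hasRemaining()) {
                ch.write(buf);       // append(log, offset=0, count=header.length)
            }
            ch.force(true);          // flush data and metadata before the log is used
            return ch.size();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

A matching recovery path would then treat a zero-length log file as an interrupted creat and ignore it, which addresses the crash the reporter observed.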
| ZooKeeper | ZOOKEEPER-2527 | Connection watch is not getting cleared when watch is created as part of get and it is fired as part of set and client is closed |
Bug | Open | Major | Unresolved | Unassigned | Rakesh Kumar Singh | Rakesh Kumar Singh | 24/Aug/16 07:34 | 28/Jun/17 03:08 | 3.5.1 | server | 0 | 2 | ZOOKEEPER-2358 | A connection watch is not cleared when the watch is created as part of a get, fired as part of a set, and the client is then closed. Steps to reproduce: 1. Configure Zookeeper in ssl mode and start it. 2. Connect to zookeeper using ./zkCli.sh 3. Check that the watch count is zero. 4. Set a watch: get -w /test 5. Check the watch count: BLR1000007042:/home/Rakesh/Zookeeper/18_Aug/zookeeper-3.5.1-alpha/bin # echo wchs | netcat 10.18.101.80 2181 1 connections watching 1 paths Total watches:1 6. Let the watch fire: set /test hello (the watch fires when the set is done). 7. Close the client. 8. Check the watch count again; the connection count is still 1, not zero: BLR1000007042:/home/Rakesh/Zookeeper/18_Aug/zookeeper-3.5.1-alpha/bin # echo wchs | netcat 10.18.101.80 2181 1 connections watching 0 paths Total watches:0 If we repeat this again and again, the count keeps increasing. Tried without SSL mode and it works fine there. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 2 years, 38 weeks, 1 day ago | 0|i32pxz: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2526 | Add config flag to prohibit connections from clients that don't do Sasl auth |
Improvement | Open | Minor | Unresolved | Unassigned | Irfan Hamid | Irfan Hamid | 24/Aug/16 00:46 | 06/Apr/18 21:06 | 3.4.6 | 3.4.6 | kerberos, security, server | 1 | 5 | 432000 | 432000 | 0% | ZOOKEEPER-2462, ZOOKEEPER-1634 | According to ZOOKEEPER-1736 the flag allowSaslFailedClient will grant clients whose Sasl auth has failed the same privileges as a client that does not attempt Sasl, i.e., anonymous login. It would be nice to have a second property "allowAnonLogin" that defaults to true, preserving current behavior; if set to false, it disconnects any client that does not attempt Sasl auth or does not complete it successfully. The motivation is to protect a shared ZooKeeper ensemble in a datacenter and reduce its surface area of vulnerability, from a resiliency/availability perspective, by limiting interaction by anonymous clients. This would also protect against rogue clients that could otherwise deny service by filling up znode storage in non-ACLed locations. I'm working off of the 3.4.6 source code (that's the one we have deployed internally). 
This functionality could be implemented by adding a flag ServerCnxn#isAuthenticated that is set to true iff ZooKeeperServer#processSasl() succeeds, and which is inspected on every incoming request; the session is closed if auth isn't done and the opcode is other than Sasl or Auth: {noformat}
--- src/java/main/org/apache/zookeeper/server/ServerCnxn.java (revision 1757035)
+++ src/java/main/org/apache/zookeeper/server/ServerCnxn.java (working copy)
@@ -55,6 +55,8 @@
      */
     boolean isOldClient = true;
 
+    boolean isAuthenticated = false;
+
     abstract int getSessionTimeout();
 
     abstract void close();
--- src/java/main/org/apache/zookeeper/server/ZooKeeperServer.java (revision 1757035)
+++ src/java/main/org/apache/zookeeper/server/ZooKeeperServer.java (working copy)
@@ -884,11 +892,26 @@
         BinaryInputArchive bia = BinaryInputArchive.getArchive(bais);
         RequestHeader h = new RequestHeader();
         h.deserialize(bia, "header");
         // Through the magic of byte buffers, txn will not be
         // pointing
         // to the start of the txn
         incomingBuffer = incomingBuffer.slice();
-        if (h.getType() == OpCode.auth) {
+        if (allowAnonLogin == false && cnxn.isAuthenticated == false) {
+            if (!(h.getType() == OpCode.auth ||
+                  h.getType() == OpCode.ping ||
+                  h.getType() == OpCode.sasl)) {
+                LOG.warn(String.format("Closing client connection %s. OpCode %d received before Sasl authentication was complete and allowAnonLogin=false",
+                        cnxn.getRemoteSocketAddress().toString(), h.getType()));
+                ReplyHeader rh = new ReplyHeader(h.getXid(), 0,
+                        KeeperException.Code.AUTHFAILED.intValue());
+                cnxn.sendResponse(rh, null, null);
+                cnxn.sendBuffer(ServerCnxnFactory.closeConn);
+                cnxn.disableRecv();
+            }
+        }
@@ -963,6 +986,7 @@
             String authorizationID = saslServer.getAuthorizationID();
             LOG.info("adding SASL authorization for authorizationID: " + authorizationID);
             cnxn.addAuthInfo(new Id("sasl",authorizationID));
+            cnxn.isAuthenticated = true;
         }
     } catch (SaslException e) {
{noformat} |
0% | 0% | 432000 | 432000 | newbie, security | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 |
Patch
|
1 year, 49 weeks, 5 days ago | 0|i32p33: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
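The patch embedded in ZOOKEEPER-2526 gates request processing on authentication state. As a standalone illustration of that gating logic (the class, enum, and opcode set here are simplified stand-ins, not ZooKeeper's real OpCode/ServerCnxn types):

```java
// Hypothetical model of the proposed allowAnonLogin gate: until a connection
// completes SASL authentication, only handshake-related opcodes are allowed;
// any other request should cause the connection to be closed.
public class SaslGate {
    public enum Op { AUTH, PING, SASL, CREATE, GET_DATA, SET_DATA }

    private final boolean allowAnonLogin;
    private boolean authenticated = false;

    public SaslGate(boolean allowAnonLogin) {
        this.allowAnonLogin = allowAnonLogin;
    }

    /** Mirrors setting cnxn.isAuthenticated = true after processSasl() succeeds. */
    public void onSaslSuccess() {
        authenticated = true;
    }

    /** Returns true if the request may proceed, false if the connection should be closed. */
    public boolean permit(Op op) {
        if (allowAnonLogin || authenticated) {
            return true;   // default behavior: anonymous clients are tolerated
        }
        return op == Op.AUTH || op == Op.PING || op == Op.SASL;
    }
}
```

With allowAnonLogin=true (the proposed default) every opcode passes, matching current behavior; with it set to false, the first non-handshake opcode from an unauthenticated client is rejected.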
| ZooKeeper | ZOOKEEPER-2525 | Throwing Exception at zookeeper server side whenever client is closing the connection |
Bug | Open | Minor | Unresolved | Unassigned | Rakesh Kumar Singh | Rakesh Kumar Singh | 23/Aug/16 05:36 | 23/Aug/16 05:36 | 3.5.1 | server | 1 | 2 | Throwing Exception at zookeeper server side whenever client is closing the connection 2016-08-17 20:38:09,030 [myid:] - WARN [New I/O worker #4:ClientCnxnSocketNetty$ZKClientHandler@432] - Exception caught: [id: 0xbb7a218d, /0:0:0:0:0:0:0:1:41679 :> localhost/0:0:0:0:0:0:0:1:2181] EXCEPTION: java.nio.channels.ClosedChannelException java.nio.channels.ClosedChannelException at org.jboss.netty.handler.ssl.SslHandler$6.run(SslHandler.java:1580) at org.jboss.netty.channel.socket.ChannelRunnableWrapper.run(ChannelRunnableWrapper.java:40) at org.jboss.netty.channel.socket.nio.AbstractNioWorker.executeInIoThread(AbstractNioWorker.java:71) at org.jboss.netty.channel.socket.nio.NioWorker.executeInIoThread(NioWorker.java:36) at org.jboss.netty.channel.socket.nio.AbstractNioWorker.executeInIoThread(AbstractNioWorker.java:57) at org.jboss.netty.channel.socket.nio.NioWorker.executeInIoThread(NioWorker.java:36) at org.jboss.netty.channel.socket.nio.AbstractNioChannelSink.execute(AbstractNioChannelSink.java:34) at org.jboss.netty.handler.ssl.SslHandler.channelClosed(SslHandler.java:1566) at org.jboss.netty.channel.Channels.fireChannelClosed(Channels.java:468) at org.jboss.netty.channel.socket.nio.AbstractNioWorker.close(AbstractNioWorker.java:376) at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:93) at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:109) at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312) at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:90) at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at 
java.lang.Thread.run(Thread.java:745) |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 30 weeks, 2 days ago | 0|i32n0n: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2524 | Start zkServer in ssl mode, start zkCli in non-ssl mode but on ssl port then try to quit at client, it takes almost 30 seconds to quit |
Bug | Open | Major | Unresolved | Unassigned | Rakesh Kumar Singh | Rakesh Kumar Singh | 23/Aug/16 05:35 | 23/Aug/16 05:35 | 3.5.1 | 0 | 1 | Start zkServer in ssl mode, start zkCli in non-ssl mode but on the ssl port, then try to quit at the client; it takes almost 30 seconds to quit. Steps to reproduce:- Configure the details required for SSL in zkServer and zkclient Make "-Dzookeeper.client.secure=false" for the client Configure clientPort=2181 and secureClientPort=3181 in the zoo.cfg file Start the zookeeper server and then the client as "zkCli.sh -server :3181 Then quit at the client console It takes almost 30 seconds to quit. Log at server side is attached. Log at client side is as below:- [zk: 10.18.101.80:3181(CONNECTING) 0] quit 2016-08-18 15:02:19,076 [myid:] - INFO [New I/O worker #1:ClientCnxnSocketNetty$ZKClientHandler@377] - channel is disconnected: [id: 0x07b576fd, /10.18.101.80:42228 :> 10.18.101.80/10.18.101.80:3181] 2016-08-18 15:02:19,077 [myid:] - INFO [New I/O worker #1:ClientCnxnSocketNetty@201] - channel is told closing 2016-08-18 15:02:19,080 [myid:] - INFO [main:ClientCnxnSocketNetty@201] - channel is told closing 2016-08-18 15:02:19,080 [myid:] - INFO [main:ZooKeeper@1110] - Session: 0x0 closed 2016-08-18 15:02:19,080 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@542] - EventThread shut down for session: 0x0 |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 30 weeks, 2 days ago | 0|i32n0f: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2523 | No proper logging when zookeeper failed to start in ssl mode. |
Bug | Open | Minor | Unresolved | Unassigned | Rakesh Kumar Singh | Rakesh Kumar Singh | 23/Aug/16 05:33 | 23/Aug/16 05:33 | 3.5.1 | server | 0 | 1 | Scenario 1: Configure zookeeper fully but with a wrong ssl password, then start the server. It starts fine, but there is no log indicating that it failed to start in ssl mode; it has actually come up in normal mode instead. A client started in ssl mode then fails to connect on either the ssl port or the normal port. Scenario 2: Configure the ssl port as 0 and start the server. The message saying it is not binding is logged only at info level. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 30 weeks, 2 days ago | 0|i32mzz: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2522 | Need to document that default "secureClientPort" for Client will be 2181 and not "clientPort" when starting in SSL enable mode |
Bug | Open | Minor | Unresolved | Unassigned | Rakesh Kumar Singh | Rakesh Kumar Singh | 23/Aug/16 05:12 | 23/Aug/16 05:12 | 3.5.1 | 0 | 1 | By default we configure the clientPort as 2181 in zookeeper and secureClientPort as 3181 in SSL enable zookeeper. But when start the corresponding client (which is in ssl enabled mode) fails to connect because it is trying to connect 2181 in secure mode. | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 30 weeks, 2 days ago | 0|i32myn: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2521 | space should be truncated while reading password for keystore/truststore which is required to configure while SSL enabled |
Bug | Open | Minor | Unresolved | Unassigned | Rakesh Kumar Singh | Rakesh Kumar Singh | 23/Aug/16 02:49 | 05/Feb/20 07:17 | 3.5.1 | 3.7.0, 3.5.8 | server | 0 | 5 | ZOOKEEPER-2566 | Whitespace should be trimmed while reading the keystore/truststore password that must be configured when SSL is enabled. As of now, if we configure the password with any leading/trailing space, the zookeeper server will fail to start. |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 3 years, 21 weeks, 1 day ago | 0|i32mqn: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
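The fix the reporter asks for in ZOOKEEPER-2521 reduces to trimming the configured value before use. A minimal sketch, assuming a hypothetical helper name (this is not ZooKeeper's actual config-reading code):

```java
// Hypothetical helper: trim leading/trailing whitespace from a password
// value read from zkEnv.sh / the config, so "  secret " and "secret"
// configure the same keystore password instead of failing startup.
public class KeystoreConfig {
    public static String readPassword(String rawValue) {
        return rawValue == null ? null : rawValue.trim();
    }
}
```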
| ZooKeeper | ZOOKEEPER-2520 | Zookeeper should accept and handle encrypted password (AES) which is required to be passed for keystore and truststore in case of SSL support |
Bug | Open | Major | Unresolved | Unassigned | Rakesh Kumar Singh | Rakesh Kumar Singh | 23/Aug/16 02:44 | 23/Aug/16 02:44 | 3.5.1 | security | 0 | 1 | When enabling SSL support we need to configure the keystore and truststore details and the corresponding passwords in zkEnv.sh/bat. Currently we can only pass a plain-text password, which is not acceptable for security reasons. We should provide a provision to handle an up-to-date encryption mechanism (AES) for these passwords. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 30 weeks, 2 days ago | 0|i32mq7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2519 | zh->state should not be 0 while handle is active |
Bug | In Progress | Major | Unresolved | Andrew Grasso | Andrew Grasso | Andrew Grasso | 22/Aug/16 08:45 | 22/Sep/16 14:36 | 3.4.6 | c client | 0 | 3 | 0 does not correspond to any of the defined states for the zookeeper handle, so a client should not expect to see this value. But in the function {{handle_error}}, we set {{zh->state = 0}}, which a client may then see. Instead, we should set our state to be {{ZOO_CONNECTING_STATE}}. At some point the code moved away from 0 as a valid state and introduced the defined states. This broke the fix to ZOOKEEPER-800, which checks if state is 0 to know if the handle has been created but has not yet connected. We now use {{ZOO_NOTCONNECTED_STATE}} to mean this, so the check for this in {{zoo_add_auth}} must be changed. We saw this error in 3.4.6, but I believe it remains present in trunk. |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 3 years, 26 weeks ago | 0|i32l9j: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2518 | Document how to set up tool chains for build and test C client on all platforms |
Improvement | Open | Minor | Unresolved | Unassigned | Michael Han | Michael Han | 19/Aug/16 20:14 | 19/Aug/16 20:16 | documentation | 0 | 2 | ZOOKEEPER-2505 | Add documentations on how to set up ZooKeeper development environment for C client for all platforms. | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 30 weeks, 6 days ago | 0|i32js7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2517 | jute.maxbuffer is ignored |
Bug | Closed | Blocker | Fixed | Mohammad Arshad | Benjamin Jaton | Benjamin Jaton | 18/Aug/16 19:41 | 17/May/17 23:43 | 30/Dec/16 17:02 | 3.5.2 | 3.5.3, 3.6.0 | 1 | 8 | ZOOKEEPER-2139 | In ClientCnxnSocket.java the parsing of the system property is erroneous: {code}packetLen = Integer.getInteger( clientConfig.getProperty(ZKConfig.JUTE_MAXBUFFER), ZKClientConfig.CLIENT_MAX_PACKET_LENGTH_DEFAULT );{code} Javadoc of Integer.getInteger states "The first argument is treated as the name of a system property", whereas here the value of the property is passed. Instead I believe the author meant to write something like: {code}packetLen = Integer.parseInt( clientConfig.getProperty( ZKConfig.JUTE_MAXBUFFER, String.valueOf(ZKClientConfig.CLIENT_MAX_PACKET_LENGTH_DEFAULT) ) );{code} |
9223372036854775807 | No Perforce job exists for this issue. | 5 | 9223372036854775807 | 3 years, 11 weeks, 6 days ago | 0|i32hzz: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
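The bug reported in ZOOKEEPER-2517 can be illustrated with a small self-contained sketch (class and constant names below are hypothetical, not ZooKeeper's actual code): {{Integer.getInteger}} treats its first argument as a system-property *name*, so passing the property's *value* always falls back to the default, whereas {{Integer.parseInt}} parses the value itself, as the report suggests.

```java
// Sketch of the Integer.getInteger misuse; names are illustrative only.
public class JuteMaxBufferDemo {
    static final int DEFAULT_LEN = 4096 * 1024; // stand-in default packet length

    // Buggy variant: Integer.getInteger treats "1048576" as a system-property
    // NAME; no such property exists, so the default is always returned.
    static int buggyParse(String propertyValue) {
        return Integer.getInteger(propertyValue, DEFAULT_LEN);
    }

    // Fixed variant, as the report suggests: parse the configured value itself.
    static int fixedParse(String propertyValue) {
        return Integer.parseInt(
                propertyValue != null ? propertyValue : String.valueOf(DEFAULT_LEN));
    }

    public static void main(String[] args) {
        String configured = "1048576"; // a user-configured jute.maxbuffer value
        System.out.println("buggy: " + buggyParse(configured)); // ignores the value
        System.out.println("fixed: " + fixedParse(configured));
    }
}
```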
| ZooKeeper | ZOOKEEPER-2516 | C client calculates invalid time interval for pings et al |
Bug | Open | Minor | Unresolved | Unassigned | Hadriel Kaplan | Hadriel Kaplan | 17/Aug/16 14:43 | 24/Aug/16 18:27 | 3.4.0, 3.4.8, 3.5.0, 3.5.1, 3.6.0 | c client | 0 | 2 | ZOOKEEPER-1626 | The C client has a function called {{calculate_interval()}} in {{zookeeper.c}}, whose purpose is to determine the number of milliseconds between a start and end time. Unfortunately its logic is invalid if the microseconds of the end time happen to be less than the microseconds of the start time - which will be the case about half the time, since the end time can fall in the next second interval. Such a case yields a very large negative number, making the function return an invalid value. Instead of reinventing the wheel, {{calculate_interval()}} should use the {{timersub()}} function from {{time.h}} if it's available - if it's not #define'd, then #define it. (It's a macro, and its source code is readily available.) |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 30 weeks, 1 day ago | 0|i32ftj: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
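The fix this report points at ({{timersub()}}) normalizes the per-field subtraction with a borrow so the microsecond field never underflows. A minimal Java analogue of that logic (illustrative only; the real code is the C macro in {{sys/time.h}}):

```java
// Java sketch of timersub-style normalization: subtract (seconds, microseconds)
// pairs with a borrow, then convert the normalized difference to milliseconds.
public class TimersubSketch {
    // Returns {seconds, microseconds} of (aSec.aUsec - bSec.bUsec), normalized
    // so the microsecond field is always in [0, 1_000_000).
    static long[] timersub(long aSec, long aUsec, long bSec, long bUsec) {
        long sec = aSec - bSec;
        long usec = aUsec - bUsec;
        if (usec < 0) { // borrow one second, as the timersub macro does
            usec += 1_000_000L;
            sec -= 1;
        }
        return new long[] { sec, usec };
    }

    static long intervalMs(long startSec, long startUsec, long endSec, long endUsec) {
        long[] d = timersub(endSec, endUsec, startSec, startUsec);
        return d[0] * 1000 + d[1] / 1000;
    }

    public static void main(String[] args) {
        // end 12.100000s minus start 11.900000s: 200 ms, never a huge negative
        System.out.println(intervalMs(11, 900_000, 12, 100_000));
    }
}
```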
| ZooKeeper | ZOOKEEPER-2515 | SessionTrackerImpl non-daemon thread slow to shutdown |
Bug | Open | Major | Unresolved | Unassigned | Paul Millar | Paul Millar | 17/Aug/16 12:34 | 17/Aug/16 12:34 | 3.4.6 | 1 | 4 | While calling SessionTrackerImpl#shutdown does result in the thread eventually stopping, it takes up to expirationInterval (3 seconds, by default) for the thread to finally die. Since the thread is not a daemon, this delays the shutdown of any application that makes use of ZooKeeper. I believe the issue is simple to fix: if the shutdown method notified the thread from within the object's monitor, this issue would be resolved. |
easyfix | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 31 weeks, 1 day ago | 0|i32fl3: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
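The fix suggested in this report can be sketched generically (names below are illustrative, not SessionTrackerImpl's actual code): a periodic thread that sleeps via {{wait()}} on a monitor is woken promptly at shutdown by notifying from within the same monitor, instead of waiting out the full expiration interval.

```java
// Sketch: prompt shutdown of a periodic worker by notifying its monitor.
public class NotifyingShutdownDemo {
    private volatile boolean running = true;
    private final Object monitor = new Object();

    private final Thread worker = new Thread(() -> {
        synchronized (monitor) {
            while (running) {
                try {
                    monitor.wait(3000); // stand-in for expirationInterval
                } catch (InterruptedException e) {
                    return;
                }
            }
        }
    });

    void shutdown() {
        synchronized (monitor) { // notify from within the object's monitor
            running = false;
            monitor.notifyAll();  // wakes the worker immediately
        }
    }

    /** Starts the worker, shuts it down, and returns how long that took in ms. */
    static long runAndMeasureShutdownMs() {
        NotifyingShutdownDemo d = new NotifyingShutdownDemo();
        d.worker.start();
        long t0 = System.nanoTime();
        d.shutdown();
        try {
            d.worker.join();
        } catch (InterruptedException ignored) {
        }
        return (System.nanoTime() - t0) / 1_000_000;
    }

    public static void main(String[] args) {
        System.out.println("worker stopped in ~" + runAndMeasureShutdownMs() + " ms");
    }
}
```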
| ZooKeeper | ZOOKEEPER-2514 | Simplify releasenotes creation for 3.4 branch - consistent with newer branches. |
Improvement | Closed | Critical | Fixed | Patrick D. Hunt | Patrick D. Hunt | Patrick D. Hunt | 16/Aug/16 17:44 | 04/Sep/16 01:28 | 16/Aug/16 19:12 | 3.4.8 | 3.4.9 | documentation | 0 | 1 | ZOOKEEPER-2364 | ZOOKEEPER-2364 introduced a new process for creating release notes for 3.5 and later branches. Backport this to the 3.4 branch in order to make the release manager's life easier (and be consistent with the "how to release" page). | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 31 weeks, 2 days ago |
Reviewed
|
0|i32e53: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2513 | majorChange exceptions during leader sync |
Bug | Open | Critical | Unresolved | Mohammad Arshad | Alexander Shraer | Alexander Shraer | 13/Aug/16 20:46 | 05/Sep/16 19:02 | 3.5.2 | server | 0 | 4 | ZOOKEEPER-2172 | In Learner.java there are exceptions being thrown in case majorChange = true, i.e., a reconfig is encountered in the stream of updates from the leader. There may be two problems in the way such exceptions are thrown: 1. important actions, e.g., processTxn, will not be done if an exception is thrown; 2. it's unclear that the learner will be able to continue where it left off in the process of syncing with the leader if that sync is interrupted by an exception. This requires further investigation. Whereas similar code in Follower and Observer is extensively tested, this code in Learner isn't tested as much. We could build on the test case developed in ZOOKEEPER-2172 to make sure this code works properly. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 28 weeks, 3 days ago | 0|i32aov: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2512 | Allow Jetty dependency to use HTTPS and Basic Auth |
Improvement | Open | Major | Unresolved | Unassigned | Edward Ribeiro | Edward Ribeiro | 12/Aug/16 14:27 | 05/Feb/20 07:17 | 3.5.2 | 3.7.0, 3.5.8 | server | 0 | 1 | ZOOKEEPER-2489 | If ZOOKEEPER-2489 gets committed then it would be nice to allow more flexible configuration of https and basic authentication to JettyAdminServer. | JettyAdminServer, security | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 31 weeks, 6 days ago | 0|i329sv: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2511 | Implement AutoCloseable in ZooKeeper.java |
Improvement | Closed | Major | Fixed | Abraham Fine | Abraham Fine | Abraham Fine | 11/Aug/16 17:31 | 17/May/17 23:44 | 09/Jan/17 16:50 | 3.5.3, 3.6.0 | 0 | 5 | As a java developer I would like to be able to use try-with-resource blocks with ZooKeeper objects in order to make closing sessions easier. | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 10 weeks, 3 days ago | 0|i3287r: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
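The try-with-resources usage this issue asks for can be sketched generically; the class below is hypothetical and merely stands in for a handle such as ZooKeeper.java, showing that implementing {{AutoCloseable}} makes the runtime invoke {{close()}} automatically when the block exits.

```java
// Sketch of the AutoCloseable / try-with-resources pattern; the class is a
// hypothetical stand-in for a session handle.
public class CloseableHandle implements AutoCloseable {
    private boolean open = true;

    public boolean isOpen() {
        return open;
    }

    @Override
    public void close() {
        open = false; // a real handle would close the session here
    }

    /** Returns true iff close() ran automatically when the try block exited. */
    public static boolean usedThenClosed() {
        CloseableHandle escaped;
        try (CloseableHandle h = new CloseableHandle()) {
            escaped = h; // use the handle inside the block...
        } // ...close() is invoked here, even if an exception had been thrown
        return !escaped.isOpen();
    }

    public static void main(String[] args) {
        System.out.println("closed on exit: " + usedThenClosed());
    }
}
```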
| ZooKeeper | ZOOKEEPER-2510 | org.apache.zookeeper.server.NettyServerCnxnTest uses wrong import for junit |
Bug | Open | Major | Unresolved | Ted Dunning | Ted Dunning | Ted Dunning | 10/Aug/16 19:50 | 05/Feb/20 07:15 | 3.5.1, 3.5.2 | 3.7.0, 3.5.8 | 0 | 1 | junit.framework.Assert is deprecated. The code should use org.junit.Assert instead. Patch coming shortly. |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 28 weeks ago | 0|i326cv: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2509 | Secure mode leaks memory |
Bug | Patch Available | Major | Unresolved | Ted Dunning | Ted Dunning | Ted Dunning | 10/Aug/16 00:55 | 05/Feb/20 07:11 | 3.5.1, 3.5.2 | 3.7.0, 3.5.8 | 0 | 6 | ZOOKEEPER-2358 | The Netty connection handling logic fails to clean up watches on connection close. This causes memory to leak. I will have a repro script available soon and a fix. I am not sure how to build a unit test since we would need to build an entire server and generate keys and such. Advice on that appreciated. |
9223372036854775807 | No Perforce job exists for this issue. | 6 | 9223372036854775807 | 2 years, 38 weeks, 1 day ago | 0|i324iv: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2508 | Many ZooKeeper tests are flaky because they proceed with zk operation without connecting to ZooKeeper server. |
Test | Closed | Major | Fixed | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 09/Aug/16 09:40 | 17/May/17 23:44 | 23/Aug/16 19:37 | 3.5.3, 3.6.0 | tests | 0 | 5 | ZOOKEEPER-2135 | Many ZooKeeper tests are flaky because they proceed with zk operations without connecting to the ZooKeeper server. Recently in our build {{org.apache.zookeeper.server.ZooKeeperServerMainTest.testStandalone()}} failed. After analyzing, we found that it failed because it does not wait for the ZooKeeper client to get connected to the server. Normally the ZooKeeper client connects immediately, but if it does not, the test case is bound to fail. Not only ZooKeeperServerMainTest but many other classes have such test cases. This jira is to address all those test cases. |
9223372036854775807 | No Perforce job exists for this issue. | 6 | 9223372036854775807 | 3 years, 30 weeks, 1 day ago |
Reviewed
|
0|i3239r: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2507 | C unit test improvement: line break between 'ZooKeeper server started' and 'Running' |
Improvement | Closed | Minor | Fixed | Michael Han | Michael Han | Michael Han | 08/Aug/16 16:28 | 31/Mar/17 05:01 | 08/Sep/16 10:34 | 3.4.8, 3.5.2 | 3.4.10, 3.5.3, 3.6.0 | tests | 0 | 2 | Currently we have this on C unit test output: {noformat} ZooKeeper server startedRunning ...... OK {noformat} This is because zkServer.sh, when echoing 'ZooKeeper server started', does not put a line break at the end. It will be clearer for readers of the console output if we fix this by adding a line break in between, so the script output and the test output are separated. After the fix the output would look like: {noformat} ZooKeeper server started Running ..... OK {noformat} |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 28 weeks ago |
Reviewed
|
0|i321yv: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2506 | Document use of read-only mode |
Improvement | Open | Minor | Unresolved | Benjamin Jaton | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 06/Aug/16 16:11 | 05/Feb/20 07:16 | 3.7.0, 3.5.8 | documentation | 0 | 2 | It would be a good addition to the documentation a simple example of how to code an application to use read-only mode. | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 32 weeks, 5 days ago | 0|i3203j: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2505 | Use shared library instead of static library in C client unit test |
Improvement | Closed | Minor | Fixed | Michael Han | Michael Han | Michael Han | 05/Aug/16 20:05 | 17/May/17 23:43 | 23/Aug/16 19:05 | 3.5.2 | 3.5.3, 3.6.0 | c client | 0 | 3 | ZOOKEEPER-1742, ZOOKEEPER-2518 | Currently we statically link the C unit tests to the ZK client library. We should use a shared library instead, as there seems to be no particular reason to stick with a static library; one benefit of a shared library is that it would allow us to override function calls from standard libraries at link time, so we can simulate the ld linker's wrap option on OS X. | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 30 weeks, 1 day ago |
Reviewed
|
0|i31zrj: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2504 | Enforce that server ids are unique in a cluster |
Bug | Patch Available | Major | Unresolved | Michael Han | Dan Benediktson | Dan Benediktson | 05/Aug/16 18:48 | 23/Feb/19 05:11 | 0 | 5 | ZOOKEEPER-2503 | The leader will happily accept connections from learners that have the same server id (i.e., due to misconfiguration). This can lead to various issues including non-unique session_ids being generated by these servers. The leader can enforce that all learners come in with unique server IDs; if a learner attempts to connect with an id that is already in use, it should be denied. |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 1 year, 3 weeks, 5 days ago | Enforces ZAB protocol version 0x10000 on the leader from all connecting learners, thus must be upgraded from 3.4.0 or higher. | 0|i31znj: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2503 | Enforce a hard constraint on the value of myid, which must be between 0 and 255 |
Bug | Open | Major | Unresolved | maoling | Michael Han | Michael Han | 05/Aug/16 12:01 | 21/Apr/19 07:06 | 3.4.9, 3.5.2, 3.4.11 | server | 0 | 8 | 0 | 9000 | ZOOKEEPER-2901, ZOOKEEPER-2504, ZOOKEEPER-260 | In ZK documentation, we have:
"The myid file consists of a single line containing only the text of that machine's id. So myid of server 1 would contain the text "1" and nothing else. The id must be unique within the ensemble and should have a value between 1 and 255." This however is not enforced in code; we should either remove the 1-255 range restriction from the documentation or enforce the constraint in code. Discussion thread: [http://zookeeper-user.578899.n2.nabble.com/Is-myid-actually-limited-to-1-255-td7581270.html] |
100% | 100% | 9000 | 0 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 1 year, 3 weeks, 4 days ago | 0|i31yy7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
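If the constraint discussed in this issue were enforced in code, a minimal sketch could look like the following (the class and method names are hypothetical, not ZooKeeper's actual code; the 1-255 range is the one quoted from the documentation above):

```java
// Sketch: reject a myid outside the documented range when parsing the myid file.
public class MyidValidator {
    static final int MIN_MYID = 1;
    static final int MAX_MYID = 255;

    // Parses the single-line contents of a myid file and enforces the range.
    static int parseMyid(String text) {
        int id = Integer.parseInt(text.trim());
        if (id < MIN_MYID || id > MAX_MYID) {
            throw new IllegalArgumentException(
                    "myid must be between " + MIN_MYID + " and " + MAX_MYID
                            + ", got " + id);
        }
        return id;
    }

    public static void main(String[] args) {
        System.out.println(parseMyid("42"));
    }
}
```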
| ZooKeeper | ZOOKEEPER-2502 | Flaky Test: org.apache.zookeeper.server.quorum.CnxManagerTest.testCnxFromFutureVersion |
Test | Closed | Major | Fixed | Michael Han | Michael Han | Michael Han | 05/Aug/16 11:45 | 31/Mar/17 05:01 | 06/Sep/16 13:24 | 3.4.9 | 3.4.10 | tests | 0 | 1 | ZOOKEEPER-2135 | {noformat} Error Message Broken pipe Stacktrace java.io.IOException: Broken pipe at sun.nio.ch.FileDispatcherImpl.write0(Native Method) at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) at sun.nio.ch.IOUtil.write(IOUtil.java:65) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:466) at java.nio.channels.Channels.writeFullyImpl(Channels.java:78) at java.nio.channels.Channels.writeFully(Channels.java:98) at java.nio.channels.Channels.access$000(Channels.java:61) at java.nio.channels.Channels$1.write(Channels.java:174) at java.io.OutputStream.write(OutputStream.java:75) at java.nio.channels.Channels$1.write(Channels.java:155) at java.io.DataOutputStream.writeInt(DataOutputStream.java:198) at org.apache.zookeeper.server.quorum.CnxManagerTest.testCnxFromFutureVersion(CnxManagerTest.java:318) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:55) Standard Output 2016-07-12 22:34:46,623 [myid:] - INFO [main:ZKTestCase$1@50] - STARTING testCnxFromFutureVersion 2016-07-12 22:34:46,627 [myid:] - INFO [main:PortAssignment@32] - assigning port 11221 2016-07-12 22:34:46,630 [myid:] - INFO [main:PortAssignment@32] - assigning port 11222 2016-07-12 22:34:46,631 [myid:] - INFO [main:PortAssignment@32] - assigning port 11223 2016-07-12 22:34:46,643 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 0.0.0.0 to address: /0.0.0.0 2016-07-12 22:34:46,658 [myid:] - INFO [main:PortAssignment@32] - assigning port 11224 2016-07-12 22:34:46,658 [myid:] - INFO [main:PortAssignment@32] - assigning port 11225 2016-07-12 22:34:46,659 [myid:] - INFO [main:PortAssignment@32] - assigning port 11226 2016-07-12 22:34:46,659 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - 
Resolved hostname: 0.0.0.0 to address: /0.0.0.0 2016-07-12 22:34:46,659 [myid:] - INFO [main:PortAssignment@32] - assigning port 11227 2016-07-12 22:34:46,659 [myid:] - INFO [main:PortAssignment@32] - assigning port 11228 2016-07-12 22:34:46,659 [myid:] - INFO [main:PortAssignment@32] - assigning port 11229 2016-07-12 22:34:46,660 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 0.0.0.0 to address: /0.0.0.0 2016-07-12 22:34:46,660 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@53] - RUNNING TEST METHOD testCnxFromFutureVersion 2016-07-12 22:34:46,672 [myid:] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:11225 2016-07-12 22:34:46,692 [myid:] - INFO [main:CnxManagerTest@301] - Election port: 11226 2016-07-12 22:34:46,692 [myid:] - INFO [ListenerThread:QuorumCnxManager$Listener@534] - My election bind port: /0.0.0.0:11226 2016-07-12 22:34:47,696 [myid:] - INFO [/0.0.0.0:11226:QuorumCnxManager$Listener@541] - Received connection request /140.211.11.27:48724 2016-07-12 22:34:49,700 [myid:] - WARN [/0.0.0.0:11226:QuorumCnxManager@274] - Exception reading or writing challenge: java.net.SocketTimeoutException: Read timed out 2016-07-12 22:34:52,700 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@74] - TEST METHOD FAILED testCnxFromFutureVersion java.io.IOException: Broken pipe at sun.nio.ch.FileDispatcherImpl.write0(Native Method) at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) at sun.nio.ch.IOUtil.write(IOUtil.java:65) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:466) at java.nio.channels.Channels.writeFullyImpl(Channels.java:78) at java.nio.channels.Channels.writeFully(Channels.java:98) at java.nio.channels.Channels.access$000(Channels.java:61) at java.nio.channels.Channels$1.write(Channels.java:174) at java.io.OutputStream.write(OutputStream.java:75) at java.nio.channels.Channels$1.write(Channels.java:155) at 
java.io.DataOutputStream.writeInt(DataOutputStream.java:198) at org.apache.zookeeper.server.quorum.CnxManagerTest.testCnxFromFutureVersion(CnxManagerTest.java:318) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:55) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28) at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:48) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184) at org.junit.runners.ParentRunner.run(ParentRunner.java:236) at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906) 2016-07-12 22:34:52,705 [myid:] - INFO 
[main:ZKTestCase$1@65] - FAILED testCnxFromFutureVersion java.io.IOException: Broken pipe at sun.nio.ch.FileDispatcherImpl.write0(Native Method) at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) at sun.nio.ch.IOUtil.write(IOUtil.java:65) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:466) at java.nio.channels.Channels.writeFullyImpl(Channels.java:78) at java.nio.channels.Channels.writeFully(Channels.java:98) at java.nio.channels.Channels.access$000(Channels.java:61) at java.nio.channels.Channels$1.write(Channels.java:174) at java.io.OutputStream.write(OutputStream.java:75) at java.nio.channels.Channels$1.write(Channels.java:155) at java.io.DataOutputStream.writeInt(DataOutputStream.java:198) at org.apache.zookeeper.server.quorum.CnxManagerTest.testCnxFromFutureVersion(CnxManagerTest.java:318) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:55) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28) at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:48) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193) 
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184) at org.junit.runners.ParentRunner.run(ParentRunner.java:236) at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906) 2016-07-12 22:34:52,706 [myid:] - INFO [main:ZKTestCase$1@55] - FINISHED testCnxFromFutureVersion 2016-07-12 22:34:52,720 [myid:] - INFO [main:ZKTestCase$1@50] - STARTING testSocketTimeout 2016-07-12 22:34:52,720 [myid:] - INFO [main:PortAssignment@32] - assigning port 11230 2016-07-12 22:34:52,720 [myid:] - INFO [main:PortAssignment@32] - assigning port 11231 2016-07-12 22:34:52,720 [myid:] - INFO [main:PortAssignment@32] - assigning port 11232 2016-07-12 22:34:52,721 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 0.0.0.0 to address: /0.0.0.0 2016-07-12 22:34:52,721 [myid:] - INFO [main:PortAssignment@32] - assigning port 11233 2016-07-12 22:34:52,722 [myid:] - INFO [main:PortAssignment@32] - assigning port 11234 2016-07-12 22:34:52,722 [myid:] - INFO [main:PortAssignment@32] - assigning port 11235 2016-07-12 22:34:52,722 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 0.0.0.0 to address: /0.0.0.0 2016-07-12 22:34:52,723 [myid:] - INFO [main:PortAssignment@32] - assigning port 11236 2016-07-12 22:34:52,723 [myid:] - INFO [main:PortAssignment@32] - assigning port 11237 2016-07-12 22:34:52,723 [myid:] - INFO [main:PortAssignment@32] - assigning port 11238 2016-07-12 22:34:52,724 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved 
hostname: 0.0.0.0 to address: /0.0.0.0 2016-07-12 22:34:52,724 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@53] - RUNNING TEST METHOD testSocketTimeout 2016-07-12 22:34:52,725 [myid:] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:11234 2016-07-12 22:34:52,726 [myid:] - INFO [main:CnxManagerTest@370] - Election port: 11235 2016-07-12 22:34:52,726 [myid:] - INFO [ListenerThread:QuorumCnxManager$Listener@534] - My election bind port: /0.0.0.0:11235 2016-07-12 22:34:53,729 [myid:] - INFO [/0.0.0.0:11235:QuorumCnxManager$Listener@541] - Received connection request /140.211.11.27:34496 2016-07-12 22:34:57,734 [myid:] - WARN [/0.0.0.0:11235:QuorumCnxManager@274] - Exception reading or writing challenge: java.net.SocketTimeoutException: Read timed out 2016-07-12 22:34:57,734 [myid:] - WARN [main:QuorumCnxManager@274] - Exception reading or writing challenge: java.io.EOFException 2016-07-12 22:34:57,734 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@58] - Memory used 3153 2016-07-12 22:34:57,735 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@63] - Number of threads 6 2016-07-12 22:34:57,735 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@78] - FINISHED TEST METHOD testSocketTimeout 2016-07-12 22:34:57,735 [myid:] - INFO [main:ZKTestCase$1@60] - SUCCEEDED testSocketTimeout 2016-07-12 22:34:57,735 [myid:] - INFO [main:ZKTestCase$1@55] - FINISHED testSocketTimeout 2016-07-12 22:34:57,735 [myid:] - INFO [main:ZKTestCase$1@50] - STARTING testWorkerThreads 2016-07-12 22:34:57,736 [myid:] - INFO [main:PortAssignment@32] - assigning port 11239 2016-07-12 22:34:57,736 [myid:] - INFO [main:PortAssignment@32] - assigning port 11240 2016-07-12 22:34:57,736 [myid:] - INFO [main:PortAssignment@32] - assigning port 11241 2016-07-12 22:34:57,736 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 0.0.0.0 to address: /0.0.0.0 2016-07-12 22:34:57,736 [myid:] - INFO [main:PortAssignment@32] - 
assigning port 11242
2016-07-12 22:34:57,737 [myid:] - INFO [main:PortAssignment@32] - assigning port 11243
2016-07-12 22:34:57,737 [myid:] - INFO [main:PortAssignment@32] - assigning port 11244
2016-07-12 22:34:57,737 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 0.0.0.0 to address: /0.0.0.0
2016-07-12 22:34:57,737 [myid:] - INFO [main:PortAssignment@32] - assigning port 11245
2016-07-12 22:34:57,737 [myid:] - INFO [main:PortAssignment@32] - assigning port 11246
2016-07-12 22:34:57,737 [myid:] - INFO [main:PortAssignment@32] - assigning port 11247
2016-07-12 22:34:57,737 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 0.0.0.0 to address: /0.0.0.0
2016-07-12 22:34:57,738 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@53] - RUNNING TEST METHOD testWorkerThreads
2016-07-12 22:34:57,738 [myid:] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:11240
2016-07-12 22:34:57,784 [myid:] - INFO [main:CnxManagerTest@393] - Starting peer 0
2016-07-12 22:34:57,789 [myid:] - INFO [main:QuorumPeer@533] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2016-07-12 22:34:57,905 [myid:] - INFO [main:QuorumPeer@548] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2016-07-12 22:34:57,945 [myid:] - INFO [ListenerThread:QuorumCnxManager$Listener@534] - My election bind port: /0.0.0.0:11241
2016-07-12 22:34:57,953 [myid:] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:11243
2016-07-12 22:34:57,954 [myid:] - INFO [main:CnxManagerTest@393] - Starting peer 1
2016-07-12 22:34:57,954 [myid:] - INFO [main:QuorumPeer@533] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2016-07-12 22:34:58,024 [myid:] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11240:QuorumPeer@774] - LOOKING
2016-07-12 22:34:58,025 [myid:] - INFO [main:QuorumPeer@548] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2016-07-12 22:34:58,025 [myid:] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11240:FastLeaderElection@818] - New election. My id = 0, proposed zxid=0x0
2016-07-12 22:34:58,028 [myid:] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@600] - Notification: 1 (message format version), 0 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
2016-07-12 22:34:58,028 [myid:] - WARN [WorkerSender[myid=0]:QuorumCnxManager@400] - Cannot open channel to 1 at election address /0.0.0.0:11244
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:204)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:589)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:381)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:354)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:452)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:433)
    at java.lang.Thread.run(Thread.java:744)
2016-07-12 22:34:58,029 [myid:] - INFO [WorkerSender[myid=0]:QuorumPeer$QuorumServer@149] - Resolved hostname: 0.0.0.0 to address: /0.0.0.0
2016-07-12 22:34:58,030 [myid:] - WARN [WorkerSender[myid=0]:QuorumCnxManager@400] - Cannot open channel to 2 at election address /0.0.0.0:11247
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:204)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:589)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:381)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:354)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:452)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:433)
    at java.lang.Thread.run(Thread.java:744)
2016-07-12 22:34:58,030 [myid:] - INFO [WorkerSender[myid=0]:QuorumPeer$QuorumServer@149] - Resolved hostname: 0.0.0.0 to address: /0.0.0.0
2016-07-12 22:34:58,050 [myid:] - INFO [ListenerThread:QuorumCnxManager$Listener@534] - My election bind port: /0.0.0.0:11244
2016-07-12 22:34:58,052 [myid:] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:11246
2016-07-12 22:34:58,052 [myid:] - INFO [main:CnxManagerTest@393] - Starting peer 2
2016-07-12 22:34:58,053 [myid:] - INFO [main:QuorumPeer@533] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2016-07-12 22:34:58,057 [myid:] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11243:QuorumPeer@774] - LOOKING
2016-07-12 22:34:58,058 [myid:] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11243:FastLeaderElection@818] - New election. My id = 1, proposed zxid=0x0
2016-07-12 22:34:58,058 [myid:] - INFO [/0.0.0.0:11241:QuorumCnxManager$Listener@541] - Received connection request /140.211.11.27:46871
2016-07-12 22:34:58,063 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@600] - Notification: 1 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
2016-07-12 22:34:58,064 [myid:] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@600] - Notification: 1 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
2016-07-12 22:34:58,064 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@600] - Notification: 1 (message format version), 0 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
2016-07-12 22:34:58,065 [myid:] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@600] - Notification: 1 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
2016-07-12 22:34:58,065 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@600] - Notification: 1 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
2016-07-12 22:34:58,065 [myid:] - WARN [WorkerSender[myid=1]:QuorumCnxManager@400] - Cannot open channel to 2 at election address /0.0.0.0:11247
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:204)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:589)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:381)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:354)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:452)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:433)
    at java.lang.Thread.run(Thread.java:744)
2016-07-12 22:34:58,065 [myid:] - WARN [WorkerSender[myid=0]:QuorumCnxManager@400] - Cannot open channel to 2 at election address /0.0.0.0:11247
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:204)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:589)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:381)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:354)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:452)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:433)
    at java.lang.Thread.run(Thread.java:744)
2016-07-12 22:34:58,066 [myid:] - INFO [WorkerSender[myid=1]:QuorumPeer$QuorumServer@149] - Resolved hostname: 0.0.0.0 to address: /0.0.0.0
2016-07-12 22:34:58,066 [myid:] - INFO [WorkerSender[myid=0]:QuorumPeer$QuorumServer@149] - Resolved hostname: 0.0.0.0 to address: /0.0.0.0
2016-07-12 22:34:58,084 [myid:] - INFO [main:QuorumPeer@548] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2016-07-12 22:34:58,133 [myid:] - INFO [ListenerThread:QuorumCnxManager$Listener@534] - My election bind port: /0.0.0.0:11247
2016-07-12 22:34:58,136 [myid:] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11246:QuorumPeer@774] - LOOKING
2016-07-12 22:34:58,136 [myid:] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11246:FastLeaderElection@818] - New election. My id = 2, proposed zxid=0x0
2016-07-12 22:34:58,137 [myid:] - INFO [/0.0.0.0:11241:QuorumCnxManager$Listener@541] - Received connection request /140.211.11.27:46874
2016-07-12 22:34:58,138 [myid:] - INFO [/0.0.0.0:11244:QuorumCnxManager$Listener@541] - Received connection request /140.211.11.27:33135
2016-07-12 22:34:58,141 [myid:] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@600] - Notification: 1 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
2016-07-12 22:34:58,143 [myid:] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@600] - Notification: 1 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
2016-07-12 22:34:58,143 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@600] - Notification: 1 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
2016-07-12 22:34:58,143 [myid:] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@600] - Notification: 1 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
2016-07-12 22:34:58,144 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@600] - Notification: 1 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
2016-07-12 22:34:58,144 [myid:] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@600] - Notification: 1 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
2016-07-12 22:34:58,144 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@600] - Notification: 1 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
2016-07-12 22:34:58,144 [myid:] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@600] - Notification: 1 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
2016-07-12 22:34:58,144 [myid:] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@600] - Notification: 1 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
2016-07-12 22:34:58,145 [myid:] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@600] - Notification: 1 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
2016-07-12 22:34:58,145 [myid:] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@600] - Notification: 1 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
2016-07-12 22:34:58,345 [myid:] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11243:QuorumPeer@844] - FOLLOWING
2016-07-12 22:34:58,345 [myid:] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11240:QuorumPeer@844] - FOLLOWING
2016-07-12 22:34:58,346 [myid:] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11246:QuorumPeer@856] - LEADING
2016-07-12 22:34:58,349 [myid:] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11243:Learner@86] - TCP NoDelay set to: true
2016-07-12 22:34:58,350 [myid:] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11246:Leader@59] - TCP NoDelay set to: true
2016-07-12 22:34:58,354 [myid:] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11243:Environment@100] - Server environment:zookeeper.version=3.4.9-SNAPSHOT-1752356, built on 07/12/2016 22:30 GMT
2016-07-12 22:34:58,354 [myid:] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11243:Environment@100] - Server environment:host.name=hemera.apache.org
2016-07-12 22:34:58,354 [myid:] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11243:Environment@100] - Server environment:java.version=1.8.0
2016-07-12 22:34:58,354 [myid:] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11243:Environment@100] - Server environment:java.vendor=Oracle Corporation
2016-07-12 22:34:58,354 [myid:] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11243:Environment@100] - Server environment:java.home=/x1/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.8.0/jre
2016-07-12 22:34:58,354 [myid:] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11243:Environment@100] - Server environment:java.class.path=/x1/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk8/branch-3.4/build/test/classes:/x1/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk8/branch-3.4/build/test/lib/antlr-2.7.6.jar:/x1/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk8/branch-3.4/build/test/lib/checkstyle-5.0.jar:/x1/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk8/branch-3.4/build/test/lib/commons-beanutils-core-1.7.0.jar:/x1/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk8/branch-3.4/build/test/lib/commons-cli-1.0.jar:/x1/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk8/branch-3.4/build/test/lib/commons-collections-3.2.2.jar:/x1/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk8/branch-3.4/build/test/lib/commons-lang-1.0.jar:/x1/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk8/branch-3.4/build/test/lib/commons-logging-1.0.3.jar:/x1/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk8/branch-3.4/build/test/lib/google-collections-0.9.jar:/x1/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk8/branch-3.4/build/test/lib/junit-4.8.1.jar:/x1/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk8/branch-3.4/build/test/lib/mockito-all-1.8.2.jar:/x1/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk8/branch-3.4/build/classes:/x1/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk8/branch-3.4/src/java/lib/ivy-2.4.0.jar:/home/jenkins/tools/ant/latest/lib/ant.jar:/x1/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk8/branch-3.4/build/lib/jline-0.9.94.jar:/x1/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk8/branch-3.4/build/lib/log4j-1.2.16.jar:/x1/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk8/branch-3.4/build/lib/netty-3.10.5.Final.jar:/x1/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk8/branch-3.4/build/lib/slf4j-api-1.6.1.jar:/x1/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk8/branch-3.4/build/lib/slf4j-log4j12-1.6.1.jar:/home/jenkins/tools/clover/latest/lib/clover.jar:/x1/jenkins/tools/ant/apache-ant-1.8.2/lib/ant-launcher.jar:/home/jenkins/tools/ant/latest/lib/ant-junit.jar:/home/jenkins/tools/ant/latest/lib/ant-junit4.jar
2016-07-12 22:34:58,355 [myid:] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11243:Environment@100] - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2016-07-12 22:34:58,355 [myid:] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11243:Environment@100] - Server environment:java.io.tmpdir=/tmp
2016-07-12 22:34:58,355 [myid:] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11243:Environment@100] - Server environment:java.compiler=<NA>
2016-07-12 22:34:58,355 [myid:] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11243:Environment@100] - Server environment:os.name=Linux
2016-07-12 22:34:58,355 [myid:] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11243:Environment@100] - Server environment:os.arch=amd64
2016-07-12 22:34:58,355 [myid:] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11243:Environment@100] - Server environment:os.version=3.2.0-104-generic
2016-07-12 22:34:58,355 [myid:] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11243:Environment@100] - Server environment:user.name=jenkins
2016-07-12 22:34:58,355 [myid:] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11243:Environment@100] - Server environment:user.home=/home/jenkins
2016-07-12 22:34:58,355 [myid:] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11243:Environment@100] - Server environment:user.dir=/x1/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk8/branch-3.4
2016-07-12 22:34:58,358 [myid:] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11243:ZooKeeperServer@170] - Created server with tickTime 1000 minSessionTimeout 2000 maxSessionTimeout 20000 datadir /x1/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk8/branch-3.4/build/test/tmp/test6977370939981061612.junit.dir/version-2 snapdir /x1/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk8/branch-3.4/build/test/tmp/test6977370939981061612.junit.dir/version-2
2016-07-12 22:34:58,358 [myid:] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11246:ZooKeeperServer@170] - Created server with tickTime 1000 minSessionTimeout 2000 maxSessionTimeout 20000 datadir /x1/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk8/branch-3.4/build/test/tmp/test4996523116536588259.junit.dir/version-2 snapdir /x1/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk8/branch-3.4/build/test/tmp/test4996523116536588259.junit.dir/version-2
2016-07-12 22:34:58,358 [myid:] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11240:ZooKeeperServer@170] - Created server with tickTime 1000 minSessionTimeout 2000 maxSessionTimeout 20000 datadir /x1/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk8/branch-3.4/build/test/tmp/test8108741215991761367.junit.dir/version-2 snapdir /x1/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk8/branch-3.4/build/test/tmp/test8108741215991761367.junit.dir/version-2
2016-07-12 22:34:58,359 [myid:] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11243:Follower@63] - FOLLOWING - LEADER ELECTION TOOK - 301
2016-07-12 22:34:58,359 [myid:] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11240:Follower@63] - FOLLOWING - LEADER ELECTION TOOK - 334
2016-07-12 22:34:58,365 [myid:] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11246:Leader@361] - LEADING - LEADER ELECTION TOOK - 224
2016-07-12 22:34:58,370 [myid:] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11240:QuorumPeer$QuorumServer@149] - Resolved hostname: 0.0.0.0 to address: /0.0.0.0
2016-07-12 22:34:58,373 [myid:] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11243:QuorumPeer$QuorumServer@149] - Resolved hostname: 0.0.0.0 to address: /0.0.0.0
2016-07-12 22:34:58,383 [myid:] - INFO [LearnerHandler-/140.211.11.27:45223:LearnerHandler@329] - Follower sid: 1 : info : org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@4e46c644
2016-07-12 22:34:58,383 [myid:] - INFO [LearnerHandler-/140.211.11.27:45222:LearnerHandler@329] - Follower sid: 0 : info : org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@23da3d
2016-07-12 22:34:58,528 [myid:] - INFO [LearnerHandler-/140.211.11.27:45222:LearnerHandler@384] - Synchronizing with Follower sid: 0 maxCommittedLog=0x0 minCommittedLog=0x0 peerLastZxid=0x0
2016-07-12 22:34:58,528 [myid:] - INFO [LearnerHandler-/140.211.11.27:45223:LearnerHandler@384] - Synchronizing with Follower sid: 1 maxCommittedLog=0x0 minCommittedLog=0x0 peerLastZxid=0x0
2016-07-12 22:34:58,529 [myid:] - INFO [LearnerHandler-/140.211.11.27:45222:LearnerHandler@393] - leader and follower are in sync, zxid=0x0
2016-07-12 22:34:58,529 [myid:] - INFO [LearnerHandler-/140.211.11.27:45223:LearnerHandler@393] - leader and follower are in sync, zxid=0x0
2016-07-12 22:34:58,529 [myid:] - INFO [LearnerHandler-/140.211.11.27:45223:LearnerHandler@458] - Sending DIFF
2016-07-12 22:34:58,529 [myid:] - INFO [LearnerHandler-/140.211.11.27:45222:LearnerHandler@458] - Sending DIFF
2016-07-12 22:34:58,530 [myid:] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11243:Learner@326] - Getting a diff from the leader 0x0
2016-07-12 22:34:58,530 [myid:] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11240:Learner@326] - Getting a diff from the leader 0x0
2016-07-12 22:34:58,534 [myid:] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11243:FileTxnSnapLog@240] - Snapshotting: 0x0 to /x1/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk8/branch-3.4/build/test/tmp/test6977370939981061612.junit.dir/version-2/snapshot.0
2016-07-12 22:34:58,534 [myid:] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11240:FileTxnSnapLog@240] - Snapshotting: 0x0 to /x1/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk8/branch-3.4/build/test/tmp/test8108741215991761367.junit.dir/version-2/snapshot.0
2016-07-12 22:34:58,623 [myid:] - INFO [LearnerHandler-/140.211.11.27:45222:LearnerHandler@518] - Received NEWLEADER-ACK message from 0
2016-07-12 22:34:58,623 [myid:] - INFO [LearnerHandler-/140.211.11.27:45223:LearnerHandler@518] - Received NEWLEADER-ACK message from 1
2016-07-12 22:34:58,623 [myid:] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11246:Leader@946] - Have quorum of supporters, sids: [ 0,2 ]; starting up and setting last processed zxid: 0x100000000
2016-07-12 22:34:58,634 [myid:] - INFO [main:CnxManagerTest@405] - Round 0, halting peer 0
2016-07-12 22:34:58,634 [myid:] - INFO [main:Follower@166] - shutdown called
java.lang.Exception: shutdown Follower
    at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:166)
    at org.apache.zookeeper.server.quorum.QuorumPeer.shutdown(QuorumPeer.java:891)
    at org.apache.zookeeper.server.quorum.CnxManagerTest.testWorkerThreads(CnxManagerTest.java:407)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
    at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:55)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
    at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:48)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
    at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906)
2016-07-12 22:34:58,635 [myid:] - INFO [main:FollowerZooKeeperServer@140] - Shutting down
2016-07-12 22:34:58,645 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11240:NIOServerCnxnFactory@219] - NIOServerCnxn factory exited run method
2016-07-12 22:34:58,645 [myid:] - INFO [main:FollowerZooKeeperServer@140] - Shutting down
2016-07-12 22:34:58,647 [myid:] - INFO [main:ZooKeeperServer@469] - shutting down
2016-07-12 22:34:58,647 [myid:] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11240:Follower@166] - shutdown called
java.lang.Exception: shutdown Follower
    at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:166)
    at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:850)
2016-07-12 22:34:58,648 [myid:] - INFO [main:FollowerRequestProcessor@107] - Shutting down
2016-07-12 22:34:58,648 [myid:] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11240:FollowerZooKeeperServer@140] - Shutting down
2016-07-12 22:34:58,648 [myid:] - INFO [FollowerRequestProcessor:0:FollowerRequestProcessor@97] - FollowerRequestProcessor exited loop!
2016-07-12 22:34:58,648 [myid:] - INFO [main:CommitProcessor@184] - Shutting down
2016-07-12 22:34:58,649 [myid:] - INFO [main:FinalRequestProcessor@402] - shutdown of request processor complete
2016-07-12 22:34:58,649 [myid:] - INFO [CommitProcessor:0:CommitProcessor@153] - CommitProcessor exited loop!
2016-07-12 22:34:58,650 [myid:] - INFO [main:SyncRequestProcessor@209] - Shutting down
2016-07-12 22:34:58,650 [myid:] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11240:SyncRequestProcessor@209] - Shutting down
2016-07-12 22:34:58,651 [myid:] - INFO [SyncThread:0:SyncRequestProcessor@187] - SyncRequestProcessor exited!
2016-07-12 22:34:58,651 [myid:] - WARN [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11240:QuorumPeer@874] - QuorumPeer main thread exited
2016-07-12 22:34:58,651 [myid:] - ERROR [/0.0.0.0:11241:QuorumCnxManager$Listener@547] - Exception while listening
java.net.SocketException: Socket closed
    at java.net.PlainSocketImpl.socketAccept(Native Method)
    at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:404)
    at java.net.ServerSocket.implAccept(ServerSocket.java:545)
    at java.net.ServerSocket.accept(ServerSocket.java:513)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$Listener.run(QuorumCnxManager.java:539)
2016-07-12 22:34:58,652 [myid:] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@727] - Interrupted while waiting for message on queue
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:418)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:879)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:65)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:715)
2016-07-12 22:34:58,653 [myid:] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@810] - Connection broken for id 2, my id = 0, error =
java.net.SocketException: Socket closed
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.read(SocketInputStream.java:150)
    at java.net.SocketInputStream.read(SocketInputStream.java:121)
    at java.net.SocketInputStream.read(SocketInputStream.java:203)
    at java.io.DataInputStream.readInt(DataInputStream.java:387)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:795)
2016-07-12 22:34:58,654 [myid:] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@813] - Interrupting SendWorker
2016-07-12 22:34:58,653 [myid:] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@810] - Connection broken for id 0, my id = 2, error =
java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:392)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:795)
2016-07-12 22:34:58,653 [myid:] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@727] - Interrupted while waiting for message on queue
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:418)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:879)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:65)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:715)
2016-07-12 22:34:58,656 [myid:] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@736] - Send worker leaving thread
2016-07-12 22:34:58,652 [myid:] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@810] - Connection broken for id 0, my id = 1, error =
java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:392)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:795)
2016-07-12 22:34:58,656 [myid:] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@813] - Interrupting SendWorker
2016-07-12 22:34:58,652 [myid:] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@810] - Connection broken for id 1, my id = 0, error =
java.net.SocketException: Socket closed
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.read(SocketInputStream.java:150)
    at java.net.SocketInputStream.read(SocketInputStream.java:121)
    at java.net.SocketInputStream.read(SocketInputStream.java:203)
    at java.io.DataInputStream.readInt(DataInputStream.java:387)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:795)
2016-07-12 22:34:58,657 [myid:] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@813] - Interrupting SendWorker
2016-07-12 22:34:58,657 [myid:] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@727] - Interrupted while waiting for message on queue
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:418)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:879)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:65)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:715)
2016-07-12 22:34:58,658 [myid:] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@736] - Send worker leaving thread
2016-07-12 22:34:58,655 [myid:] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@813] - Interrupting SendWorker
2016-07-12 22:34:58,654 [myid:] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@736] - Send worker leaving thread
2016-07-12 22:34:58,659 [myid:] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@727] - Interrupted while waiting for message on queue
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:418)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:879)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:65)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:715)
2016-07-12 22:34:58,659 [myid:] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@736] - Send worker leaving thread
2016-07-12 22:34:59,154 [myid:] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:11240
2016-07-12 22:34:59,155 [myid:] - INFO [main:CnxManagerTest@414] - Round {}, restarting peer {}[Ljava.lang.Object;@12bc6874
2016-07-12 22:34:59,157 [myid:] - INFO [main:FileSnap@83] - Reading snapshot /x1/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk8/branch-3.4/build/test/tmp/test8108741215991761367.junit.dir/version-2/snapshot.0
2016-07-12 22:34:59,159 [myid:] - INFO [ListenerThread:QuorumCnxManager$Listener@534] - My election bind port: /0.0.0.0:11241
2016-07-12 22:34:59,161 [myid:] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11240:QuorumPeer@774] - LOOKING
2016-07-12 22:34:59,162 [myid:] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11240:FastLeaderElection@818] - New election. My id = 0, proposed zxid=0x0
2016-07-12 22:34:59,162 [myid:] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@600] - Notification: 1 (message format version), 0 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x1 (n.peerEpoch) LOOKING (my state)
2016-07-12 22:34:59,163 [myid:] - INFO [WorkerSender[myid=0]:QuorumCnxManager@199] - Have smaller server identifier, so dropping the connection: (1, 0)
2016-07-12 22:34:59,163 [myid:] - INFO [/0.0.0.0:11244:QuorumCnxManager$Listener@541] - Received connection request /140.211.11.27:33142
2016-07-12 22:34:59,163 [myid:] - INFO [/0.0.0.0:11247:QuorumCnxManager$Listener@541] - Received connection request /140.211.11.27:44785
2016-07-12 22:34:59,163 [myid:] - INFO [WorkerSender[myid=0]:QuorumCnxManager@199] - Have smaller server identifier, so dropping the connection: (2, 0)
2016-07-12 22:34:59,164 [myid:] - INFO [/0.0.0.0:11241:QuorumCnxManager$Listener@541] - Received connection request /140.211.11.27:46884
2016-07-12 22:34:59,166 [myid:] - INFO [/0.0.0.0:11241:QuorumCnxManager$Listener@541] - Received connection request /140.211.11.27:46885
2016-07-12 22:34:59,167 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@600] - Notification: 1 (message format version), 0 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x1 (n.peerEpoch) FOLLOWING (my state)
2016-07-12 22:34:59,169 [myid:] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@600] - Notification: 1 (message format version), 0 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x1 (n.peerEpoch) LEADING (my state)
2016-07-12 22:34:59,169 [myid:] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@600] - Notification: 1 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
2016-07-12 22:34:59,169 [myid:] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@600] - Notification: 1 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
2016-07-12 22:34:59,169 [myid:] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@600] - Notification: 1 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), FOLLOWING (n.state), 1 (n.sid), 0x1 (n.peerEpoch) LOOKING (my state)
2016-07-12 22:34:59,170 [myid:] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@600] - Notification: 1 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LEADING (n.state), 2 (n.sid), 0x1 (n.peerEpoch) LOOKING (my state)
2016-07-12 22:34:59,170 [myid:] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11240:QuorumPeer@844] - FOLLOWING
2016-07-12 22:34:59,170 [myid:] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11240:ZooKeeperServer@170] - Created server with tickTime 1000 minSessionTimeout 2000 maxSessionTimeout 20000 datadir /x1/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk8/branch-3.4/build/test/tmp/test8108741215991761367.junit.dir/version-2 snapdir /x1/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk8/branch-3.4/build/test/tmp/test8108741215991761367.junit.dir/version-2
2016-07-12 22:34:59,170 [myid:] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11240:Follower@63] - FOLLOWING - LEADER ELECTION TOOK - 8
2016-07-12 22:34:59,171 [myid:] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11240:QuorumPeer$QuorumServer@149] - Resolved hostname: 0.0.0.0 to address: /0.0.0.0
2016-07-12 22:34:59,172 [myid:] - INFO [LearnerHandler-/140.211.11.27:45228:LearnerHandler@329] - Follower sid: 0 : info : org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@23da3d
2016-07-12 22:34:59,172 [myid:] - INFO [LearnerHandler-/140.211.11.27:45228:LearnerHandler@384] - Synchronizing with Follower sid: 0 maxCommittedLog=0x0 minCommittedLog=0x0 peerLastZxid=0x0
2016-07-12 22:34:59,172 [myid:] - INFO [LearnerHandler-/140.211.11.27:45228:LearnerHandler@458] - Sending SNAP
2016-07-12 22:34:59,172 [myid:] - INFO [LearnerHandler-/140.211.11.27:45228:LearnerHandler@482] - Sending snapshot last zxid of peer is 0x0 zxid of leader is 0x100000000sent zxid of db as 0x100000000
2016-07-12 22:34:59,172 [myid:] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11240:Learner@329] - Getting a snapshot from leader
2016-07-12 22:34:59,174 [myid:] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11240:FileTxnSnapLog@240] - Snapshotting: 0x100000000 to /x1/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk8/branch-3.4/build/test/tmp/test8108741215991761367.junit.dir/version-2/snapshot.100000000
2016-07-12 22:34:59,284 [myid:] - INFO [LearnerHandler-/140.211.11.27:45228:LearnerHandler@518] - Received NEWLEADER-ACK message from 0
2016-07-12 22:34:59,652 [myid:] - INFO [/0.0.0.0:11241:QuorumCnxManager$Listener@560] - Leaving listener
2016-07-12 22:34:59,660 [myid:] - INFO [main:CnxManagerTest@405] - Round 1, halting peer 0
2016-07-12 22:34:59,660 [myid:] - INFO [main:Follower@166] - shutdown called j ...[truncated 254755 chars]... 
Keeper_branch34_jdk8/branch-3.4/build/test/tmp/test4996523116536588259.junit.dir/version-2 snapdir /x1/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk8/branch-3.4/build/test/tmp/test4996523116536588259.junit.dir/version-2 2016-07-12 22:35:13,328 [myid:] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11246:Follower@63] - FOLLOWING - LEADER ELECTION TOOK - 3 2016-07-12 22:35:13,329 [myid:] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11246:QuorumPeer$QuorumServer@149] - Resolved hostname: 0.0.0.0 to address: /0.0.0.0 2016-07-12 22:35:13,329 [myid:] - INFO [LearnerHandler-/140.211.11.27:32997:LearnerHandler@329] - Follower sid: 2 : info : org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@6f23fc6d 2016-07-12 22:35:13,330 [myid:] - INFO [LearnerHandler-/140.211.11.27:32997:LearnerHandler@384] - Synchronizing with Follower sid: 2 maxCommittedLog=0x0 minCommittedLog=0x0 peerLastZxid=0x200000000 2016-07-12 22:35:13,330 [myid:] - INFO [LearnerHandler-/140.211.11.27:32997:LearnerHandler@393] - leader and follower are in sync, zxid=0x200000000 2016-07-12 22:35:13,330 [myid:] - INFO [LearnerHandler-/140.211.11.27:32997:LearnerHandler@458] - Sending DIFF 2016-07-12 22:35:13,330 [myid:] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11246:Learner@326] - Getting a diff from the leader 0x200000000 2016-07-12 22:35:13,331 [myid:] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11246:FileTxnSnapLog@240] - Snapshotting: 0x200000000 to /x1/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk8/branch-3.4/build/test/tmp/test4996523116536588259.junit.dir/version-2/snapshot.200000000 2016-07-12 22:35:13,376 [myid:] - INFO [LearnerHandler-/140.211.11.27:32997:LearnerHandler@518] - Received NEWLEADER-ACK message from 2 2016-07-12 22:35:13,821 [myid:] - INFO [/0.0.0.0:11247:QuorumCnxManager$Listener@560] - Leaving listener 2016-07-12 22:35:13,824 [myid:] - INFO [main:Follower@166] - shutdown called java.lang.Exception: shutdown Follower at 
org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:166) at org.apache.zookeeper.server.quorum.QuorumPeer.shutdown(QuorumPeer.java:891) at org.apache.zookeeper.server.quorum.CnxManagerTest.testWorkerThreads(CnxManagerTest.java:424) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:55) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28) at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:48) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184) at org.junit.runners.ParentRunner.run(ParentRunner.java:236) at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052) at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906) 2016-07-12 22:35:13,824 [myid:] - INFO [main:FollowerZooKeeperServer@140] - Shutting down 2016-07-12 22:35:13,824 [myid:] - INFO [main:ZooKeeperServer@469] - shutting down 2016-07-12 22:35:13,825 [myid:] - INFO [main:FollowerRequestProcessor@107] - Shutting down 2016-07-12 22:35:13,825 [myid:] - INFO [main:CommitProcessor@184] - Shutting down 2016-07-12 22:35:13,825 [myid:] - INFO [main:FinalRequestProcessor@402] - shutdown of request processor complete 2016-07-12 22:35:13,825 [myid:] - INFO [CommitProcessor:0:CommitProcessor@153] - CommitProcessor exited loop! 2016-07-12 22:35:13,826 [myid:] - INFO [main:SyncRequestProcessor@209] - Shutting down 2016-07-12 22:35:13,825 [myid:] - INFO [FollowerRequestProcessor:0:FollowerRequestProcessor@97] - FollowerRequestProcessor exited loop! 2016-07-12 22:35:13,826 [myid:] - INFO [SyncThread:0:SyncRequestProcessor@187] - SyncRequestProcessor exited! 2016-07-12 22:35:13,827 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11240:NIOServerCnxnFactory@219] - NIOServerCnxn factory exited run method 2016-07-12 22:35:13,828 [myid:] - ERROR [/0.0.0.0:11241:QuorumCnxManager$Listener@547] - Exception while listening java.net.SocketException: Socket closed at java.net.PlainSocketImpl.socketAccept(Native Method) at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:404) at java.net.ServerSocket.implAccept(ServerSocket.java:545) at java.net.ServerSocket.accept(ServerSocket.java:513) at org.apache.zookeeper.server.quorum.QuorumCnxManager$Listener.run(QuorumCnxManager.java:539) 2016-07-12 22:35:13,829 [myid:] - INFO [main:Leader@496] - Shutting down 2016-07-12 22:35:13,828 [myid:] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@810] - Connection broken for id 0, my id = 1, error = java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at 
org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:795) 2016-07-12 22:35:13,829 [myid:] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@813] - Interrupting SendWorker 2016-07-12 22:35:13,828 [myid:] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@810] - Connection broken for id 0, my id = 2, error = java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:795) 2016-07-12 22:35:13,829 [myid:] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@813] - Interrupting SendWorker 2016-07-12 22:35:13,828 [myid:] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@727] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:418) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:879) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:65) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:715) 2016-07-12 22:35:13,830 [myid:] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@736] - Send worker leaving thread 2016-07-12 22:35:13,828 [myid:] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@727] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088) at 
java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:418) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:879) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:65) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:715) 2016-07-12 22:35:13,830 [myid:] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@736] - Send worker leaving thread 2016-07-12 22:35:13,828 [myid:] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@810] - Connection broken for id 1, my id = 0, error = java.net.SocketException: Socket closed at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.read(SocketInputStream.java:150) at java.net.SocketInputStream.read(SocketInputStream.java:121) at java.net.SocketInputStream.read(SocketInputStream.java:203) at java.io.DataInputStream.readInt(DataInputStream.java:387) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:795) 2016-07-12 22:35:13,830 [myid:] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@813] - Interrupting SendWorker 2016-07-12 22:35:13,830 [myid:] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@727] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:418) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:879) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:65) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:715) 2016-07-12 22:35:13,830 [myid:] - WARN 
[SendWorker:0:QuorumCnxManager$SendWorker@736] - Send worker leaving thread 2016-07-12 22:35:13,829 [myid:] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@727] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:418) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:879) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:65) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:715) 2016-07-12 22:35:13,831 [myid:] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@736] - Send worker leaving thread 2016-07-12 22:35:13,829 [myid:] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@810] - Connection broken for id 2, my id = 0, error = java.net.SocketException: Socket closed at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.read(SocketInputStream.java:150) at java.net.SocketInputStream.read(SocketInputStream.java:121) at java.net.SocketInputStream.read(SocketInputStream.java:203) at java.io.DataInputStream.readInt(DataInputStream.java:387) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:795) 2016-07-12 22:35:13,831 [myid:] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@813] - Interrupting SendWorker 2016-07-12 22:35:13,829 [myid:] - INFO [main:Leader@502] - Shutdown called java.lang.Exception: shutdown Leader! 
reason: quorum Peer shutdown at org.apache.zookeeper.server.quorum.Leader.shutdown(Leader.java:502) at org.apache.zookeeper.server.quorum.QuorumPeer.shutdown(QuorumPeer.java:888) at org.apache.zookeeper.server.quorum.CnxManagerTest.testWorkerThreads(CnxManagerTest.java:424) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:55) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28) at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:48) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184) at org.junit.runners.ParentRunner.run(ParentRunner.java:236) at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052) at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906) 2016-07-12 22:35:13,831 [myid:] - INFO [main:ZooKeeperServer@469] - shutting down 2016-07-12 22:35:13,831 [myid:] - INFO [main:SessionTrackerImpl@225] - Shutting down 2016-07-12 22:35:13,832 [myid:] - INFO [main:PrepRequestProcessor@765] - Shutting down 2016-07-12 22:35:13,832 [myid:] - INFO [main:ProposalRequestProcessor@88] - Shutting down 2016-07-12 22:35:13,832 [myid:] - INFO [main:CommitProcessor@184] - Shutting down 2016-07-12 22:35:13,832 [myid:] - INFO [main:Leader$ToBeAppliedRequestProcessor@661] - Shutting down 2016-07-12 22:35:13,832 [myid:] - INFO [main:FinalRequestProcessor@402] - shutdown of request processor complete 2016-07-12 22:35:13,832 [myid:] - INFO [main:SyncRequestProcessor@209] - Shutting down 2016-07-12 22:35:13,831 [myid:] - INFO [LearnerCnxAcceptor-/0.0.0.0:11242:Leader$LearnerCnxAcceptor@325] - exception while shutting down acceptor: java.net.SocketException: Socket closed 2016-07-12 22:35:13,832 [myid:] - INFO [SyncThread:1:SyncRequestProcessor@187] - SyncRequestProcessor exited! 2016-07-12 22:35:13,832 [myid:] - INFO [CommitProcessor:1:CommitProcessor@153] - CommitProcessor exited loop! 2016-07-12 22:35:13,832 [myid:] - INFO [ProcessThread(sid:1 cport:-1)::PrepRequestProcessor@143] - PrepRequestProcessor exited loop! 
2016-07-12 22:35:13,834 [myid:] - WARN [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11246:Follower@89] - Exception when following the leader java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63) at org.apache.zookeeper.server.quorum.QuorumPacket.deserialize(QuorumPacket.java:83) at org.apache.jute.BinaryInputArchive.readRecord(BinaryInputArchive.java:99) at org.apache.zookeeper.server.quorum.Learner.readPacket(Learner.java:153) at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:85) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:846) 2016-07-12 22:35:13,833 [myid:] - WARN [LearnerHandler-/140.211.11.27:32993:LearnerHandler@644] - ******* GOODBYE /140.211.11.27:32993 ******** 2016-07-12 22:35:13,835 [myid:] - WARN [LearnerHandler-/140.211.11.27:32990:LearnerHandler@644] - ******* GOODBYE /140.211.11.27:32990 ******** 2016-07-12 22:35:13,834 [myid:] - WARN [LearnerHandler-/140.211.11.27:32980:LearnerHandler@644] - ******* GOODBYE /140.211.11.27:32980 ******** 2016-07-12 22:35:13,835 [myid:] - WARN [LearnerHandler-/140.211.11.27:32980:LearnerHandler@656] - Ignoring unexpected exception java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1220) at java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:335) at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:339) at org.apache.zookeeper.server.quorum.LearnerHandler.shutdown(LearnerHandler.java:654) at org.apache.zookeeper.server.quorum.LearnerHandler.run(LearnerHandler.java:647) 2016-07-12 22:35:13,834 [myid:] - WARN [LearnerHandler-/140.211.11.27:32997:LearnerHandler@644] - ******* GOODBYE /140.211.11.27:32997 ******** 2016-07-12 22:35:13,835 [myid:] - WARN [LearnerHandler-/140.211.11.27:32997:LearnerHandler@656] - Ignoring unexpected exception 
java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1220) at java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:335) at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:339) at org.apache.zookeeper.server.quorum.LearnerHandler.shutdown(LearnerHandler.java:654) at org.apache.zookeeper.server.quorum.LearnerHandler.run(LearnerHandler.java:647) 2016-07-12 22:35:13,835 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11243:NIOServerCnxnFactory@219] - NIOServerCnxn factory exited run method 2016-07-12 22:35:13,835 [myid:] - WARN [LearnerHandler-/140.211.11.27:32990:LearnerHandler@656] - Ignoring unexpected exception java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1220) at java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:335) at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:339) at org.apache.zookeeper.server.quorum.LearnerHandler.shutdown(LearnerHandler.java:654) at org.apache.zookeeper.server.quorum.LearnerHandler.run(LearnerHandler.java:647) 2016-07-12 22:35:13,836 [myid:] - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11243:QuorumPeer@862] - Unexpected exception java.lang.InterruptedException: sleep interrupted at java.lang.Thread.sleep(Native Method) at org.apache.zookeeper.server.quorum.Leader.lead(Leader.java:456) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:859) 2016-07-12 22:35:13,836 [myid:] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11243:Leader@496] - Shutting down 2016-07-12 22:35:13,836 [myid:] - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11243:QuorumPeer@874] - QuorumPeer main thread exited 2016-07-12 22:35:13,835 [myid:] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11246:Follower@166] - shutdown called java.lang.Exception: shutdown Follower at 
org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:166) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:850) 2016-07-12 22:35:13,837 [myid:] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11246:FollowerZooKeeperServer@140] - Shutting down 2016-07-12 22:35:13,837 [myid:] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11246:ZooKeeperServer@469] - shutting down 2016-07-12 22:35:13,837 [myid:] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11246:FollowerRequestProcessor@107] - Shutting down 2016-07-12 22:35:13,835 [myid:] - WARN [LearnerHandler-/140.211.11.27:32993:LearnerHandler@656] - Ignoring unexpected exception java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1220) at java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:335) at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:339) at org.apache.zookeeper.server.quorum.LearnerHandler.shutdown(LearnerHandler.java:654) at org.apache.zookeeper.server.quorum.LearnerHandler.run(LearnerHandler.java:647) 2016-07-12 22:35:13,835 [myid:] - WARN [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11240:Follower@89] - Exception when following the leader java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63) at org.apache.zookeeper.server.quorum.QuorumPacket.deserialize(QuorumPacket.java:83) at org.apache.jute.BinaryInputArchive.readRecord(BinaryInputArchive.java:99) at org.apache.zookeeper.server.quorum.Learner.readPacket(Learner.java:153) at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:85) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:846) 2016-07-12 22:35:13,837 [myid:] - INFO [FollowerRequestProcessor:2:FollowerRequestProcessor@97] - FollowerRequestProcessor exited loop! 
2016-07-12 22:35:13,838 [myid:] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11240:Follower@166] - shutdown called java.lang.Exception: shutdown Follower at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:166) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:850) 2016-07-12 22:35:13,837 [myid:] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11246:CommitProcessor@184] - Shutting down 2016-07-12 22:35:13,839 [myid:] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11246:FinalRequestProcessor@402] - shutdown of request processor complete 2016-07-12 22:35:13,839 [myid:] - INFO [CommitProcessor:2:CommitProcessor@153] - CommitProcessor exited loop! 2016-07-12 22:35:13,837 [myid:] - INFO [main:Follower@166] - shutdown called java.lang.Exception: shutdown Follower at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:166) at org.apache.zookeeper.server.quorum.QuorumPeer.shutdown(QuorumPeer.java:891) at org.apache.zookeeper.server.quorum.CnxManagerTest.testWorkerThreads(CnxManagerTest.java:424) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:55) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28) at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:48) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184) at org.junit.runners.ParentRunner.run(ParentRunner.java:236) at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906) 2016-07-12 22:35:13,839 [myid:] - INFO [main:FollowerZooKeeperServer@140] - Shutting down 2016-07-12 22:35:13,839 [myid:] - INFO [main:SyncRequestProcessor@209] - Shutting down 2016-07-12 22:35:13,836 [myid:] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@810] - Connection broken for id 2, my id = 1, error = java.net.SocketException: Socket closed at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.read(SocketInputStream.java:150) at java.net.SocketInputStream.read(SocketInputStream.java:121) at java.net.SocketInputStream.read(SocketInputStream.java:203) at java.io.DataInputStream.readInt(DataInputStream.java:387) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:795) 2016-07-12 22:35:13,840 [myid:] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@813] - Interrupting SendWorker 2016-07-12 22:35:13,836 [myid:] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@810] - Connection broken for id 1, my id = 2, error = java.io.EOFException at 
java.io.DataInputStream.readInt(DataInputStream.java:392) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:795) 2016-07-12 22:35:13,840 [myid:] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@813] - Interrupting SendWorker 2016-07-12 22:35:13,836 [myid:] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@727] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:418) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:879) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:65) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:715) 2016-07-12 22:35:13,840 [myid:] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@736] - Send worker leaving thread 2016-07-12 22:35:13,836 [myid:] - ERROR [/0.0.0.0:11244:QuorumCnxManager$Listener@547] - Exception while listening java.net.SocketException: Socket closed at java.net.PlainSocketImpl.socketAccept(Native Method) at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:404) at java.net.ServerSocket.implAccept(ServerSocket.java:545) at java.net.ServerSocket.accept(ServerSocket.java:513) at org.apache.zookeeper.server.quorum.QuorumCnxManager$Listener.run(QuorumCnxManager.java:539) 2016-07-12 22:35:13,840 [myid:] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@727] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:418) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:879) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:65) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:715) 2016-07-12 22:35:13,841 [myid:] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@736] - Send worker leaving thread 2016-07-12 22:35:13,840 [myid:] - INFO [SyncThread:2:SyncRequestProcessor@187] - SyncRequestProcessor exited! 2016-07-12 22:35:13,839 [myid:] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11246:SyncRequestProcessor@209] - Shutting down 2016-07-12 22:35:13,841 [myid:] - WARN [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11246:QuorumPeer@874] - QuorumPeer main thread exited 2016-07-12 22:35:13,841 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11246:NIOServerCnxnFactory@219] - NIOServerCnxn factory exited run method 2016-07-12 22:35:13,839 [myid:] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11240:FollowerZooKeeperServer@140] - Shutting down 2016-07-12 22:35:13,842 [myid:] - INFO [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11240:SyncRequestProcessor@209] - Shutting down 2016-07-12 22:35:13,842 [myid:] - WARN [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:11240:QuorumPeer@874] - QuorumPeer main thread exited 2016-07-12 22:35:13,842 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@58] - Memory used 11673 2016-07-12 22:35:13,842 [myid:] - ERROR [/0.0.0.0:11247:QuorumCnxManager$Listener@547] - Exception while listening java.net.SocketException: Socket closed at java.net.PlainSocketImpl.socketAccept(Native Method) at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:404) at java.net.ServerSocket.implAccept(ServerSocket.java:545) at java.net.ServerSocket.accept(ServerSocket.java:513) at 
org.apache.zookeeper.server.quorum.QuorumCnxManager$Listener.run(QuorumCnxManager.java:539) 2016-07-12 22:35:13,842 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@63] - Number of threads 20 2016-07-12 22:35:13,842 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@78] - FINISHED TEST METHOD testWorkerThreads 2016-07-12 22:35:13,842 [myid:] - INFO [main:ZKTestCase$1@60] - SUCCEEDED testWorkerThreads 2016-07-12 22:35:13,843 [myid:] - INFO [main:ZKTestCase$1@55] - FINISHED testWorkerThreads 2016-07-12 22:35:13,843 [myid:] - INFO [main:ZKTestCase$1@50] - STARTING testCnxManager 2016-07-12 22:35:13,843 [myid:] - INFO [main:PortAssignment@32] - assigning port 11248 2016-07-12 22:35:13,843 [myid:] - INFO [main:PortAssignment@32] - assigning port 11249 2016-07-12 22:35:13,843 [myid:] - INFO [main:PortAssignment@32] - assigning port 11250 2016-07-12 22:35:13,843 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 0.0.0.0 to address: /0.0.0.0 2016-07-12 22:35:13,844 [myid:] - INFO [main:PortAssignment@32] - assigning port 11251 2016-07-12 22:35:13,844 [myid:] - INFO [main:PortAssignment@32] - assigning port 11252 2016-07-12 22:35:13,844 [myid:] - INFO [main:PortAssignment@32] - assigning port 11253 2016-07-12 22:35:13,844 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 0.0.0.0 to address: /0.0.0.0 2016-07-12 22:35:13,844 [myid:] - INFO [main:PortAssignment@32] - assigning port 11254 2016-07-12 22:35:13,844 [myid:] - INFO [main:PortAssignment@32] - assigning port 11255 2016-07-12 22:35:13,844 [myid:] - INFO [main:PortAssignment@32] - assigning port 11256 2016-07-12 22:35:13,844 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 0.0.0.0 to address: /0.0.0.0 2016-07-12 22:35:13,844 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@53] - RUNNING TEST METHOD testCnxManager 2016-07-12 22:35:13,845 [myid:] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:11252 
2016-07-12 22:35:13,846 [myid:] - INFO [Thread-19:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:11249 2016-07-12 22:35:13,846 [myid:] - INFO [ListenerThread:QuorumCnxManager$Listener@534] - My election bind port: /0.0.0.0:11253 2016-07-12 22:35:13,846 [myid:] - WARN [main:QuorumCnxManager@400] - Cannot open channel to 0 at election address /0.0.0.0:11250 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:204) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:589) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:381) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:354) at org.apache.zookeeper.server.quorum.CnxManagerTest.testCnxManager(CnxManagerTest.java:142) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:55) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28) at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:48) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184) at org.junit.runners.ParentRunner.run(ParentRunner.java:236) at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906) 2016-07-12 22:35:13,847 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 0.0.0.0 to address: /0.0.0.0 2016-07-12 22:35:13,847 [myid:] - INFO [ListenerThread:QuorumCnxManager$Listener@534] - My election bind port: /0.0.0.0:11250 2016-07-12 22:35:13,847 [myid:] - INFO [/0.0.0.0:11253:QuorumCnxManager$Listener@541] - Received connection request /140.211.11.27:41746 2016-07-12 22:35:13,847 [myid:] - INFO [Thread-19:QuorumCnxManager@199] - Have smaller server identifier, so dropping the connection: (1, 0) 2016-07-12 22:35:13,848 [myid:] - INFO [/0.0.0.0:11250:QuorumCnxManager$Listener@541] - Received connection request /140.211.11.27:46883 2016-07-12 22:35:13,849 [myid:] - INFO [/0.0.0.0:11253:QuorumCnxManager$Listener@541] - Received connection request /140.211.11.27:41748 2016-07-12 22:35:13,849 [myid:] - INFO [Thread-19:QuorumCnxManager@199] - Have smaller server identifier, so dropping the connection: (1, 0) 2016-07-12 22:35:13,849 [myid:] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@727] - Interrupted while waiting for message on queue 
java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:418) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:879) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:65) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:715) 2016-07-12 22:35:13,849 [myid:] - INFO [/0.0.0.0:11250:QuorumCnxManager$Listener@541] - Received connection request /140.211.11.27:46885 2016-07-12 22:35:13,849 [myid:] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@736] - Send worker leaving thread 2016-07-12 22:35:13,849 [myid:] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@810] - Connection broken for id 0, my id = 1, error = java.net.SocketException: Socket closed at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.read(SocketInputStream.java:150) at java.net.SocketInputStream.read(SocketInputStream.java:121) at java.net.SocketInputStream.read(SocketInputStream.java:203) at java.io.DataInputStream.readInt(DataInputStream.java:387) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:795) 2016-07-12 22:35:13,850 [myid:] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@813] - Interrupting SendWorker 2016-07-12 22:35:13,849 [myid:] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@810] - Connection broken for id 1, my id = 0, error = java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:795) 2016-07-12 22:35:13,850 [myid:] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@813] - 
Interrupting SendWorker 2016-07-12 22:35:13,850 [myid:] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@727] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:418) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:879) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:65) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:715) 2016-07-12 22:35:13,851 [myid:] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@736] - Send worker leaving thread 2016-07-12 22:35:13,851 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@58] - Memory used 12332 2016-07-12 22:35:13,851 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@63] - Number of threads 26 2016-07-12 22:35:13,851 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@78] - FINISHED TEST METHOD testCnxManager 2016-07-12 22:35:13,851 [myid:] - INFO [main:ZKTestCase$1@60] - SUCCEEDED testCnxManager 2016-07-12 22:35:13,851 [myid:] - INFO [main:ZKTestCase$1@55] - FINISHED testCnxManager 2016-07-12 22:35:13,862 [myid:] - INFO [main:ZKTestCase$1@50] - STARTING testCnxManagerTimeout 2016-07-12 22:35:13,862 [myid:] - INFO [main:PortAssignment@32] - assigning port 11257 2016-07-12 22:35:13,862 [myid:] - INFO [main:PortAssignment@32] - assigning port 11258 2016-07-12 22:35:13,862 [myid:] - INFO [main:PortAssignment@32] - assigning port 11259 2016-07-12 22:35:13,862 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 0.0.0.0 to address: /0.0.0.0 2016-07-12 22:35:13,862 [myid:] - INFO [main:PortAssignment@32] - assigning 
port 11260 2016-07-12 22:35:13,862 [myid:] - INFO [main:PortAssignment@32] - assigning port 11261 2016-07-12 22:35:13,862 [myid:] - INFO [main:PortAssignment@32] - assigning port 11262 2016-07-12 22:35:13,862 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 0.0.0.0 to address: /0.0.0.0 2016-07-12 22:35:13,863 [myid:] - INFO [main:PortAssignment@32] - assigning port 11263 2016-07-12 22:35:13,863 [myid:] - INFO [main:PortAssignment@32] - assigning port 11264 2016-07-12 22:35:13,863 [myid:] - INFO [main:PortAssignment@32] - assigning port 11265 2016-07-12 22:35:13,863 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 0.0.0.0 to address: /0.0.0.0 2016-07-12 22:35:13,863 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@53] - RUNNING TEST METHOD testCnxManagerTimeout 2016-07-12 22:35:13,863 [myid:] - INFO [main:PortAssignment@32] - assigning port 11266 2016-07-12 22:35:13,863 [myid:] - INFO [main:CnxManagerTest@171] - This is the dead address I'm trying: 192.0.2.179 2016-07-12 22:35:13,863 [myid:] - INFO [main:PortAssignment@32] - assigning port 11267 2016-07-12 22:35:13,863 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 192.0.2.179 to address: /192.0.2.179 2016-07-12 22:35:13,864 [myid:] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:11261 2016-07-12 22:35:13,864 [myid:] - INFO [ListenerThread:QuorumCnxManager$Listener@534] - My election bind port: /0.0.0.0:11262 2016-07-12 22:35:14,000 [myid:] - INFO [SessionTracker:SessionTrackerImpl@162] - SessionTrackerImpl exited loop! 
2016-07-12 22:35:14,310 [myid:] - INFO [WorkerSender[myid=2]:FastLeaderElection$Messenger$WorkerSender@438] - WorkerSender is down 2016-07-12 22:35:14,311 [myid:] - INFO [WorkerReceiver[myid=2]:FastLeaderElection$Messenger$WorkerReceiver@407] - WorkerReceiver is down 2016-07-12 22:35:14,829 [myid:] - INFO [/0.0.0.0:11241:QuorumCnxManager$Listener@560] - Leaving listener 2016-07-12 22:35:14,840 [myid:] - INFO [/0.0.0.0:11244:QuorumCnxManager$Listener@560] - Leaving listener 2016-07-12 22:35:14,842 [myid:] - INFO [/0.0.0.0:11247:QuorumCnxManager$Listener@560] - Leaving listener 2016-07-12 22:35:15,319 [myid:] - INFO [WorkerSender[myid=2]:FastLeaderElection$Messenger$WorkerSender@438] - WorkerSender is down 2016-07-12 22:35:15,320 [myid:] - INFO [WorkerReceiver[myid=2]:FastLeaderElection$Messenger$WorkerReceiver@407] - WorkerReceiver is down 2016-07-12 22:35:16,327 [myid:] - INFO [WorkerReceiver[myid=0]:FastLeaderElection$Messenger$WorkerReceiver@407] - WorkerReceiver is down 2016-07-12 22:35:16,327 [myid:] - INFO [WorkerSender[myid=2]:FastLeaderElection$Messenger$WorkerSender@438] - WorkerSender is down 2016-07-12 22:35:16,327 [myid:] - INFO [WorkerSender[myid=0]:FastLeaderElection$Messenger$WorkerSender@438] - WorkerSender is down 2016-07-12 22:35:16,328 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection$Messenger$WorkerReceiver@407] - WorkerReceiver is down 2016-07-12 22:35:16,328 [myid:] - INFO [WorkerSender[myid=1]:FastLeaderElection$Messenger$WorkerSender@438] - WorkerSender is down 2016-07-12 22:35:16,328 [myid:] - INFO [WorkerReceiver[myid=2]:FastLeaderElection$Messenger$WorkerReceiver@407] - WorkerReceiver is down 2016-07-12 22:35:18,869 [myid:] - WARN [main:QuorumCnxManager@400] - Cannot open channel to 2 at election address /192.0.2.179:11267 java.net.SocketTimeoutException: connect timed out at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345) at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:589) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:381) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:354) at org.apache.zookeeper.server.quorum.CnxManagerTest.testCnxManagerTimeout(CnxManagerTest.java:187) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:55) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28) at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:48) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184) at org.junit.runners.ParentRunner.run(ParentRunner.java:236) at 
junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906) 2016-07-12 22:35:18,870 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 192.0.2.179 to address: /192.0.2.179 2016-07-12 22:35:18,870 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@58] - Memory used 12750 2016-07-12 22:35:18,870 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@63] - Number of threads 13 2016-07-12 22:35:18,871 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@78] - FINISHED TEST METHOD testCnxManagerTimeout 2016-07-12 22:35:18,871 [myid:] - INFO [main:ZKTestCase$1@60] - SUCCEEDED testCnxManagerTimeout 2016-07-12 22:35:18,871 [myid:] - INFO [main:ZKTestCase$1@55] - FINISHED testCnxManagerTimeout 2016-07-12 22:35:18,871 [myid:] - INFO [main:ZKTestCase$1@50] - STARTING testCnxManagerSpinLock 2016-07-12 22:35:18,872 [myid:] - INFO [main:PortAssignment@32] - assigning port 11268 2016-07-12 22:35:18,872 [myid:] - INFO [main:PortAssignment@32] - assigning port 11269 2016-07-12 22:35:18,872 [myid:] - INFO [main:PortAssignment@32] - assigning port 11270 2016-07-12 22:35:18,872 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 0.0.0.0 to address: /0.0.0.0 2016-07-12 22:35:18,872 [myid:] - INFO [main:PortAssignment@32] - assigning port 11271 2016-07-12 22:35:18,872 [myid:] - INFO [main:PortAssignment@32] - assigning port 11272 2016-07-12 22:35:18,872 [myid:] - INFO [main:PortAssignment@32] - assigning port 11273 2016-07-12 22:35:18,873 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 0.0.0.0 to address: /0.0.0.0 2016-07-12 22:35:18,873 [myid:] - INFO [main:PortAssignment@32] - assigning port 11274 2016-07-12 
22:35:18,873 [myid:] - INFO [main:PortAssignment@32] - assigning port 11275 2016-07-12 22:35:18,873 [myid:] - INFO [main:PortAssignment@32] - assigning port 11276 2016-07-12 22:35:18,873 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 0.0.0.0 to address: /0.0.0.0 2016-07-12 22:35:18,874 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@53] - RUNNING TEST METHOD testCnxManagerSpinLock 2016-07-12 22:35:18,874 [myid:] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:11272 2016-07-12 22:35:18,875 [myid:] - INFO [main:CnxManagerTest@214] - Election port: 11273 2016-07-12 22:35:18,875 [myid:] - INFO [ListenerThread:QuorumCnxManager$Listener@534] - My election bind port: /0.0.0.0:11273 2016-07-12 22:35:19,876 [myid:] - INFO [/0.0.0.0:11273:QuorumCnxManager$Listener@541] - Received connection request /140.211.11.27:37939 2016-07-12 22:35:19,876 [myid:] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@810] - Connection broken for id 2, my id = 1, error = java.io.IOException: Received packet with invalid packet: -20 at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:797) 2016-07-12 22:35:19,877 [myid:] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@813] - Interrupting SendWorker 2016-07-12 22:35:19,877 [myid:] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@727] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:418) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:879) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:65) at 
org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:715) 2016-07-12 22:35:19,877 [myid:] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@736] - Send worker leaving thread 2016-07-12 22:35:20,876 [myid:] - INFO [main:CnxManagerTest@249] - Socket has been closed as expected 2016-07-12 22:35:20,876 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@58] - Memory used 12944 2016-07-12 22:35:20,876 [myid:] - ERROR [/0.0.0.0:11273:QuorumCnxManager$Listener@547] - Exception while listening java.net.SocketException: Socket closed at java.net.PlainSocketImpl.socketAccept(Native Method) at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:404) at java.net.ServerSocket.implAccept(ServerSocket.java:545) at java.net.ServerSocket.accept(ServerSocket.java:513) at org.apache.zookeeper.server.quorum.QuorumCnxManager$Listener.run(QuorumCnxManager.java:539) 2016-07-12 22:35:20,877 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@63] - Number of threads 14 2016-07-12 22:35:20,877 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@78] - FINISHED TEST METHOD testCnxManagerSpinLock 2016-07-12 22:35:20,877 [myid:] - INFO [main:ZKTestCase$1@60] - SUCCEEDED testCnxManagerSpinLock 2016-07-12 22:35:20,877 [myid:] - INFO [main:ZKTestCase$1@55] - FINISHED testCnxManagerSpinLock {noformat} |
flaky, flaky-test | 9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 3 years, 28 weeks, 2 days ago |
Reviewed
|
0|i31yvj: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
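The repeated log line in the dump above, "Have smaller server identifier, so dropping the connection: (1, 0)", reflects QuorumCnxManager's rule for keeping at most one TCP connection per pair of quorum peers. The sketch below is illustrative only (not ZooKeeper source); the class and method names are hypothetical, and the tuple in the log message is not interpreted here.

```java
// Illustrative sketch of the connection tie-breaking rule seen in the logs:
// only the peer with the LARGER server id keeps its outbound connection;
// a peer that connects to a larger-id server drops its own attempt and
// waits for that server to connect back, so each pair ends up with exactly
// one connection. Names below are hypothetical, not ZooKeeper's API.
public class QuorumTieBreak {
    // Returns true when this peer should drop its own outbound connection.
    static boolean shouldDropOutbound(long myId, long remoteId) {
        return remoteId > myId;
    }

    public static void main(String[] args) {
        // Server 1 connecting to server 2: drop, server 2 will call back.
        System.out.println(shouldDropOutbound(1, 2)); // true
        // Server 2 connecting to server 1: keep the connection.
        System.out.println(shouldDropOutbound(2, 1)); // false
    }
}
```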
| ZooKeeper | ZOOKEEPER-2501 | Improve ZooKeeperServerListenerImpl#notifyStopping() logging |
Improvement | Open | Minor | Unresolved | Rakesh Radhakrishnan | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 04/Aug/16 14:43 | 04/Aug/16 15:03 | 3.4.8, 3.5.2 | server | 0 | 1 | This JIRA addresses [~fpj]'s comment: {code} ZooKeeperCriticalThread#handleException() logs an error message including a throwable. Again, in ZooKeeperServerListenerImpl#notifyStopping(), it logs an info message with the exit code. Here it is better to consolidate the logging and have only a log error message here in this #notifyStopping method. We would need to change the signature and pass a throwable, though. {code} |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 33 weeks ago | 0|i31xdj: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
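The ZOOKEEPER-2501 description above suggests replacing the two log lines (an ERROR with a throwable in ZooKeeperCriticalThread#handleException() plus an INFO with the exit code in notifyStopping()) with one consolidated error message, passing the throwable through the changed signature. The sketch below shows one way that might look; the three-argument signature and the message format are assumptions, not the committed fix.

```java
import java.io.PrintWriter;
import java.io.StringWriter;

// Hypothetical sketch of the consolidation proposed in ZOOKEEPER-2501:
// the Throwable travels with the stop notification, so a single ERROR-level
// message carries the thread name, exit code, and stack trace instead of
// being split across handleException() and notifyStopping().
public class StoppingNotification {
    // Builds the consolidated message; a real implementation would hand
    // this to the server's logger at ERROR level.
    static String formatStopping(String threadName, int exitCode, Throwable cause) {
        StringWriter sw = new StringWriter();
        cause.printStackTrace(new PrintWriter(sw));
        return "Thread " + threadName + " exits, error code " + exitCode
                + System.lineSeparator() + sw;
    }

    public static void main(String[] args) {
        System.out.println(formatStopping("SyncThread:2", 1,
                new IllegalStateException("simulated critical failure")));
    }
}
```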
| ZooKeeper | ZOOKEEPER-2500 | Fix compilation warnings for CliException classes |
Bug | Closed | Major | Fixed | Abraham Fine | Abraham Fine | Abraham Fine | 03/Aug/16 11:39 | 17/May/17 23:44 | 05/Aug/16 17:07 | 3.5.3 | 3.5.3, 3.6.0 | 0 | 3 | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 32 weeks, 6 days ago |
Reviewed
|
0|i31uzr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2499 | ZOOKEEPER-3170 Flaky Test: org.apache.zookeeper.test.SSLTest.testSecureQuorumServer |
Sub-task | Closed | Major | Cannot Reproduce | Andor Molnar | Michael Han | Michael Han | 03/Aug/16 01:44 | 19/Dec/19 18:01 | 25/Oct/18 11:20 | 3.5.2 | 3.5.5 | tests | 1 | 3 | {noformat} Error Message waiting for server 0 being up Stacktrace junit.framework.AssertionFailedError: waiting for server 0 being up at org.apache.zookeeper.test.SSLTest.testSecureQuorumServer(SSLTest.java:100) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:79) Standard Output 2016-08-03 05:33:41,529 [myid:] - INFO [main:JUnit4ZKTestRunner@47] - No test.method specified. using default methods. 2016-08-03 05:33:41,594 [myid:] - INFO [main:JUnit4ZKTestRunner@47] - No test.method specified. using default methods. 2016-08-03 05:33:41,608 [myid:] - INFO [main:ZKTestCase$1@55] - STARTING testSecureQuorumServer 2016-08-03 05:33:41,611 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@77] - RUNNING TEST METHOD testSecureQuorumServer 2016-08-03 05:33:41,614 [myid:] - INFO [main:PortAssignment@151] - Test process 8/8 using ports from 30072 - 32764. 2016-08-03 05:33:41,616 [myid:] - INFO [main:PortAssignment@85] - Assigned port 30073 from range 30072 - 32764. 2016-08-03 05:33:41,617 [myid:] - INFO [main:PortAssignment@85] - Assigned port 30074 from range 30072 - 32764. 2016-08-03 05:33:41,617 [myid:] - INFO [main:PortAssignment@85] - Assigned port 30075 from range 30072 - 32764. 2016-08-03 05:33:41,617 [myid:] - INFO [main:PortAssignment@85] - Assigned port 30076 from range 30072 - 32764. 2016-08-03 05:33:41,618 [myid:] - INFO [main:PortAssignment@85] - Assigned port 30077 from range 30072 - 32764. 2016-08-03 05:33:41,618 [myid:] - INFO [main:PortAssignment@85] - Assigned port 30078 from range 30072 - 32764. 2016-08-03 05:33:41,618 [myid:] - INFO [main:PortAssignment@85] - Assigned port 30079 from range 30072 - 32764. 2016-08-03 05:33:41,619 [myid:] - INFO [main:PortAssignment@85] - Assigned port 30080 from range 30072 - 32764. 
2016-08-03 05:33:41,619 [myid:] - INFO [main:PortAssignment@85] - Assigned port 30081 from range 30072 - 32764. 2016-08-03 05:33:41,620 [myid:] - INFO [main:PortAssignment@85] - Assigned port 30082 from range 30072 - 32764. 2016-08-03 05:33:41,620 [myid:] - INFO [main:PortAssignment@85] - Assigned port 30083 from range 30072 - 32764. 2016-08-03 05:33:41,623 [myid:] - INFO [main:PortAssignment@85] - Assigned port 30084 from range 30072 - 32764. 2016-08-03 05:33:41,641 [myid:] - INFO [main:QuorumPeerTestBase$MainThread@131] - id = 0 tmpDir = /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/tmp/test8166855371662584608.junit.dir clientPort = -1 adminServerPort = 8080 2016-08-03 05:33:41,647 [myid:] - INFO [main:QuorumPeerTestBase$MainThread@131] - id = 1 tmpDir = /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/tmp/test5311610672905591522.junit.dir clientPort = -1 adminServerPort = 8080 2016-08-03 05:33:41,648 [myid:] - INFO [main:QuorumPeerTestBase$MainThread@131] - id = 2 tmpDir = /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/tmp/test4754133370740902988.junit.dir clientPort = -1 adminServerPort = 8080 2016-08-03 05:33:41,651 [myid:] - INFO [Thread-1:QuorumPeerConfig@116] - Reading configuration from: /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/tmp/test5311610672905591522.junit.dir/zoo.cfg 2016-08-03 05:33:41,651 [myid:] - INFO [Thread-0:QuorumPeerConfig@116] - Reading configuration from: /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/tmp/test8166855371662584608.junit.dir/zoo.cfg 2016-08-03 05:33:41,651 [myid:] - INFO [Thread-2:QuorumPeerConfig@116] - Reading configuration from: /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/tmp/test4754133370740902988.junit.dir/zoo.cfg 2016-08-03 05:33:41,651 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 30073 
2016-08-03 05:33:41,653 [myid:] - INFO [Thread-2:QuorumPeerConfig@308] - clientPort is not set 2016-08-03 05:33:41,653 [myid:] - INFO [Thread-1:QuorumPeerConfig@308] - clientPort is not set 2016-08-03 05:33:41,653 [myid:] - INFO [Thread-1:QuorumPeerConfig@332] - secureClientPortAddress is 0.0.0.0/0.0.0.0:30078 2016-08-03 05:33:41,653 [myid:] - INFO [Thread-0:QuorumPeerConfig@308] - clientPort is not set 2016-08-03 05:33:41,653 [myid:] - INFO [Thread-2:QuorumPeerConfig@332] - secureClientPortAddress is 0.0.0.0/0.0.0.0:30082 2016-08-03 05:33:41,653 [myid:] - INFO [Thread-0:QuorumPeerConfig@332] - secureClientPortAddress is 0.0.0.0/0.0.0.0:30074 2016-08-03 05:33:41,655 [myid:] - INFO [main:ClientBase@248] - server 127.0.0.1:30073 not up java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.client.FourLetterWordMain.send4LetterWord(FourLetterWordMain.java:99) at org.apache.zookeeper.client.FourLetterWordMain.send4LetterWord(FourLetterWordMain.java:69) at org.apache.zookeeper.test.ClientBase.waitForServerUp(ClientBase.java:241) at org.apache.zookeeper.test.ClientBase.waitForServerUp(ClientBase.java:232) at org.apache.zookeeper.test.SSLTest.testSecureQuorumServer(SSLTest.java:100) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:79) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:53) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:38) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:535) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1182) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:1033) 2016-08-03 05:33:41,664 [myid:1] - INFO [Thread-1:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3 2016-08-03 05:33:41,664 [myid:1] - INFO [Thread-1:DatadirCleanupManager@79] - autopurge.purgeInterval set to 0 2016-08-03 05:33:41,664 [myid:1] - INFO [Thread-1:DatadirCleanupManager@101] - Purge task is not scheduled. 
2016-08-03 05:33:41,664 [myid:2] - INFO [Thread-2:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3 2016-08-03 05:33:41,665 [myid:2] - INFO [Thread-2:DatadirCleanupManager@79] - autopurge.purgeInterval set to 0 2016-08-03 05:33:41,665 [myid:2] - INFO [Thread-2:DatadirCleanupManager@101] - Purge task is not scheduled. 2016-08-03 05:33:41,664 [myid:0] - INFO [Thread-0:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3 2016-08-03 05:33:41,665 [myid:0] - INFO [Thread-0:DatadirCleanupManager@79] - autopurge.purgeInterval set to 0 2016-08-03 05:33:41,665 [myid:0] - INFO [Thread-0:DatadirCleanupManager@101] - Purge task is not scheduled. 2016-08-03 05:33:41,665 [myid:1] - INFO [Thread-1:ManagedUtil@46] - Log4j found with jmx enabled. 2016-08-03 05:33:41,665 [myid:2] - INFO [Thread-2:ManagedUtil@46] - Log4j found with jmx enabled. 2016-08-03 05:33:41,665 [myid:0] - INFO [Thread-0:ManagedUtil@46] - Log4j found with jmx enabled. 2016-08-03 05:33:41,734 [myid:0] - ERROR [Thread-0:AppenderDynamicMBean@209] - Could not add DynamicLayoutMBean for [CONSOLE,layout=org.apache.log4j.PatternLayout]. 
javax.management.InstanceAlreadyExistsException: log4j:appender=CONSOLE,layout=org.apache.log4j.PatternLayout at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324) at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) at org.apache.log4j.jmx.AbstractDynamicMBean.registerMBean(AbstractDynamicMBean.java:160) at org.apache.log4j.jmx.AppenderDynamicMBean.registerLayoutMBean(AppenderDynamicMBean.java:203) at org.apache.log4j.jmx.AppenderDynamicMBean.preRegister(AppenderDynamicMBean.java:339) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.preRegister(DefaultMBeanServerInterceptor.java:1007) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:919) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324) at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) at org.apache.log4j.jmx.AbstractDynamicMBean.registerMBean(AbstractDynamicMBean.java:160) at org.apache.log4j.jmx.LoggerDynamicMBean.registerAppenderMBean(LoggerDynamicMBean.java:264) at org.apache.log4j.jmx.LoggerDynamicMBean.appenderMBeanRegistration(LoggerDynamicMBean.java:252) at org.apache.log4j.jmx.LoggerDynamicMBean.postRegister(LoggerDynamicMBean.java:280) at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.postRegister(DefaultMBeanServerInterceptor.java:1024) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:974) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324) at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) at org.apache.log4j.jmx.AbstractDynamicMBean.registerMBean(AbstractDynamicMBean.java:160) at org.apache.log4j.jmx.HierarchyDynamicMBean.addLoggerMBean(HierarchyDynamicMBean.java:125) at org.apache.log4j.jmx.HierarchyDynamicMBean.postRegister(HierarchyDynamicMBean.java:263) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.postRegister(DefaultMBeanServerInterceptor.java:1024) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:974) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324) at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) at org.apache.zookeeper.jmx.ManagedUtil.registerLog4jMBeans(ManagedUtil.java:75) at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:131) at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:120) at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.run(QuorumPeerTestBase.java:245) at java.lang.Thread.run(Thread.java:745) 2016-08-03 05:33:41,735 [myid:1] - ERROR [Thread-1:HierarchyDynamicMBean@138] - Could not add loggerMBean for [root]. 
javax.management.InstanceAlreadyExistsException: log4j:logger=root
    at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
    at org.apache.log4j.jmx.AbstractDynamicMBean.registerMBean(AbstractDynamicMBean.java:160)
    at org.apache.log4j.jmx.HierarchyDynamicMBean.addLoggerMBean(HierarchyDynamicMBean.java:125)
    at org.apache.log4j.jmx.HierarchyDynamicMBean.postRegister(HierarchyDynamicMBean.java:263)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.postRegister(DefaultMBeanServerInterceptor.java:1024)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:974)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
    at org.apache.zookeeper.jmx.ManagedUtil.registerLog4jMBeans(ManagedUtil.java:75)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:131)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:120)
    at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.run(QuorumPeerTestBase.java:245)
    at java.lang.Thread.run(Thread.java:745)
2016-08-03 05:33:41,734 [myid:2] - ERROR [Thread-2:AppenderDynamicMBean@209] - Could not add DynamicLayoutMBean for [CONSOLE,layout=org.apache.log4j.PatternLayout].
javax.management.InstanceAlreadyExistsException: log4j:appender=CONSOLE,layout=org.apache.log4j.PatternLayout
    at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
    at org.apache.log4j.jmx.AbstractDynamicMBean.registerMBean(AbstractDynamicMBean.java:160)
    at org.apache.log4j.jmx.AppenderDynamicMBean.registerLayoutMBean(AppenderDynamicMBean.java:203)
    at org.apache.log4j.jmx.AppenderDynamicMBean.preRegister(AppenderDynamicMBean.java:339)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.preRegister(DefaultMBeanServerInterceptor.java:1007)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:919)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
    at org.apache.log4j.jmx.AbstractDynamicMBean.registerMBean(AbstractDynamicMBean.java:160)
    at org.apache.log4j.jmx.LoggerDynamicMBean.registerAppenderMBean(LoggerDynamicMBean.java:264)
    at org.apache.log4j.jmx.LoggerDynamicMBean.appenderMBeanRegistration(LoggerDynamicMBean.java:252)
    at org.apache.log4j.jmx.LoggerDynamicMBean.postRegister(LoggerDynamicMBean.java:280)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.postRegister(DefaultMBeanServerInterceptor.java:1024)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:974)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
    at org.apache.log4j.jmx.AbstractDynamicMBean.registerMBean(AbstractDynamicMBean.java:160)
    at org.apache.log4j.jmx.HierarchyDynamicMBean.addLoggerMBean(HierarchyDynamicMBean.java:125)
    at org.apache.log4j.jmx.HierarchyDynamicMBean.postRegister(HierarchyDynamicMBean.java:263)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.postRegister(DefaultMBeanServerInterceptor.java:1024)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:974)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
    at org.apache.zookeeper.jmx.ManagedUtil.registerLog4jMBeans(ManagedUtil.java:75)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:131)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:120)
    at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.run(QuorumPeerTestBase.java:245)
    at java.lang.Thread.run(Thread.java:745)
2016-08-03 05:33:41,737 [myid:1] - ERROR [Thread-1:ManagedUtil@114] - Problems while registering log4j jmx beans!
javax.management.InstanceAlreadyExistsException: log4j:hiearchy=default
    at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
    at org.apache.zookeeper.jmx.ManagedUtil.registerLog4jMBeans(ManagedUtil.java:75)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:131)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:120)
    at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.run(QuorumPeerTestBase.java:245)
    at java.lang.Thread.run(Thread.java:745)
2016-08-03 05:33:41,736 [myid:0] - ERROR [Thread-0:LoggerDynamicMBean@270] - Could not add appenderMBean for [CONSOLE].
javax.management.InstanceAlreadyExistsException: log4j:appender=CONSOLE
    at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
    at org.apache.log4j.jmx.AbstractDynamicMBean.registerMBean(AbstractDynamicMBean.java:160)
    at org.apache.log4j.jmx.LoggerDynamicMBean.registerAppenderMBean(LoggerDynamicMBean.java:264)
    at org.apache.log4j.jmx.LoggerDynamicMBean.appenderMBeanRegistration(LoggerDynamicMBean.java:252)
    at org.apache.log4j.jmx.LoggerDynamicMBean.postRegister(LoggerDynamicMBean.java:280)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.postRegister(DefaultMBeanServerInterceptor.java:1024)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:974)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
    at org.apache.log4j.jmx.AbstractDynamicMBean.registerMBean(AbstractDynamicMBean.java:160)
    at org.apache.log4j.jmx.HierarchyDynamicMBean.addLoggerMBean(HierarchyDynamicMBean.java:125)
    at org.apache.log4j.jmx.HierarchyDynamicMBean.postRegister(HierarchyDynamicMBean.java:263)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.postRegister(DefaultMBeanServerInterceptor.java:1024)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:974)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
    at org.apache.zookeeper.jmx.ManagedUtil.registerLog4jMBeans(ManagedUtil.java:75)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:131)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:120)
    at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.run(QuorumPeerTestBase.java:245)
    at java.lang.Thread.run(Thread.java:745)
2016-08-03 05:33:41,739 [myid:1] - WARN [Thread-1:QuorumPeerMain@133] - Unable to register log4j JMX control
javax.management.JMException: javax.management.InstanceAlreadyExistsException: log4j:hiearchy=default
    at org.apache.zookeeper.jmx.ManagedUtil.registerLog4jMBeans(ManagedUtil.java:115)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:131)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:120)
    at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.run(QuorumPeerTestBase.java:245)
    at java.lang.Thread.run(Thread.java:745)
2016-08-03 05:33:41,738 [myid:2] - ERROR [Thread-2:LoggerDynamicMBean@270] - Could not add appenderMBean for [CONSOLE].
javax.management.InstanceAlreadyExistsException: log4j:appender=CONSOLE
    at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
    at org.apache.log4j.jmx.AbstractDynamicMBean.registerMBean(AbstractDynamicMBean.java:160)
    at org.apache.log4j.jmx.LoggerDynamicMBean.registerAppenderMBean(LoggerDynamicMBean.java:264)
    at org.apache.log4j.jmx.LoggerDynamicMBean.appenderMBeanRegistration(LoggerDynamicMBean.java:252)
    at org.apache.log4j.jmx.LoggerDynamicMBean.postRegister(LoggerDynamicMBean.java:280)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.postRegister(DefaultMBeanServerInterceptor.java:1024)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:974)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
    at org.apache.log4j.jmx.AbstractDynamicMBean.registerMBean(AbstractDynamicMBean.java:160)
    at org.apache.log4j.jmx.HierarchyDynamicMBean.addLoggerMBean(HierarchyDynamicMBean.java:125)
    at org.apache.log4j.jmx.HierarchyDynamicMBean.postRegister(HierarchyDynamicMBean.java:263)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.postRegister(DefaultMBeanServerInterceptor.java:1024)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:974)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
    at org.apache.zookeeper.jmx.ManagedUtil.registerLog4jMBeans(ManagedUtil.java:75)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:131)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:120)
    at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.run(QuorumPeerTestBase.java:245)
    at java.lang.Thread.run(Thread.java:745)
2016-08-03 05:33:41,740 [myid:1] - INFO [Thread-1:QuorumPeerMain@136] - Starting quorum peer
2016-08-03 05:33:41,742 [myid:2] - ERROR [Thread-2:HierarchyDynamicMBean@138] - Could not add loggerMBean for [root].
javax.management.InstanceAlreadyExistsException: log4j:logger=root
    at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
    at org.apache.log4j.jmx.AbstractDynamicMBean.registerMBean(AbstractDynamicMBean.java:160)
    at org.apache.log4j.jmx.HierarchyDynamicMBean.addLoggerMBean(HierarchyDynamicMBean.java:125)
    at org.apache.log4j.jmx.HierarchyDynamicMBean.postRegister(HierarchyDynamicMBean.java:263)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.postRegister(DefaultMBeanServerInterceptor.java:1024)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:974)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
    at org.apache.zookeeper.jmx.ManagedUtil.registerLog4jMBeans(ManagedUtil.java:75)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:131)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:120)
    at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.run(QuorumPeerTestBase.java:245)
    at java.lang.Thread.run(Thread.java:745)
2016-08-03 05:33:41,743 [myid:2] - ERROR [Thread-2:ManagedUtil@114] - Problems while registering log4j jmx beans!
javax.management.InstanceAlreadyExistsException: log4j:hiearchy=default
    at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
    at org.apache.zookeeper.jmx.ManagedUtil.registerLog4jMBeans(ManagedUtil.java:75)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:131)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:120)
    at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.run(QuorumPeerTestBase.java:245)
    at java.lang.Thread.run(Thread.java:745)
2016-08-03 05:33:41,743 [myid:2] - WARN [Thread-2:QuorumPeerMain@133] - Unable to register log4j JMX control
javax.management.JMException: javax.management.InstanceAlreadyExistsException: log4j:hiearchy=default
    at org.apache.zookeeper.jmx.ManagedUtil.registerLog4jMBeans(ManagedUtil.java:115)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:131)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:120)
    at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.run(QuorumPeerTestBase.java:245)
    at java.lang.Thread.run(Thread.java:745)
2016-08-03 05:33:41,744 [myid:2] - INFO [Thread-2:QuorumPeerMain@136] - Starting quorum peer
2016-08-03 05:33:41,746 [myid:0] - INFO [Thread-0:QuorumPeerMain@136] - Starting quorum peer
2016-08-03 05:33:41,909 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 30073
2016-08-03 05:33:41,910 [myid:] - INFO [main:ClientBase@248] - server 127.0.0.1:30073 not up
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.client.FourLetterWordMain.send4LetterWord(FourLetterWordMain.java:99)
    at org.apache.zookeeper.client.FourLetterWordMain.send4LetterWord(FourLetterWordMain.java:69)
    at org.apache.zookeeper.test.ClientBase.waitForServerUp(ClientBase.java:241)
    at org.apache.zookeeper.test.ClientBase.waitForServerUp(ClientBase.java:232)
    at org.apache.zookeeper.test.SSLTest.testSecureQuorumServer(SSLTest.java:100)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:79)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:53)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
    at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:38)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:535)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1182)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:1033)
2016-08-03 05:33:41,949 [myid:2] - INFO [Thread-2:QuorumPeer@1327] - Local sessions disabled
2016-08-03 05:33:41,949 [myid:1] - INFO [Thread-1:QuorumPeer@1327] - Local sessions disabled
2016-08-03 05:33:41,949 [myid:1] - INFO [Thread-1:QuorumPeer@1338] - Local session upgrading disabled
2016-08-03 05:33:41,949 [myid:0] - INFO [Thread-0:QuorumPeer@1327] - Local sessions disabled
2016-08-03 05:33:41,950 [myid:0] - INFO [Thread-0:QuorumPeer@1338] - Local session upgrading disabled
2016-08-03 05:33:41,949 [myid:1] - INFO [Thread-1:QuorumPeer@1305] - tickTime set to 4000
2016-08-03 05:33:41,950 [myid:1] - INFO [Thread-1:QuorumPeer@1349] - minSessionTimeout set to 8000
2016-08-03 05:33:41,950 [myid:1] - INFO [Thread-1:QuorumPeer@1360] - maxSessionTimeout set to 80000
2016-08-03 05:33:41,950 [myid:1] - INFO [Thread-1:QuorumPeer@1375] - initLimit set to 10
2016-08-03 05:33:41,949 [myid:2] - INFO [Thread-2:QuorumPeer@1338] - Local session upgrading disabled
2016-08-03 05:33:41,950 [myid:2] - INFO [Thread-2:QuorumPeer@1305] - tickTime set to 4000
2016-08-03 05:33:41,950 [myid:2] - INFO [Thread-2:QuorumPeer@1349] - minSessionTimeout set to 8000
2016-08-03 05:33:41,950 [myid:2] - INFO [Thread-2:QuorumPeer@1360] - maxSessionTimeout set to 80000
2016-08-03 05:33:41,950 [myid:2] - INFO [Thread-2:QuorumPeer@1375] - initLimit set to 10
2016-08-03 05:33:41,950 [myid:0] - INFO [Thread-0:QuorumPeer@1305] - tickTime set to 4000
2016-08-03 05:33:41,951 [myid:0] - INFO [Thread-0:QuorumPeer@1349] - minSessionTimeout set to 8000
2016-08-03 05:33:41,951 [myid:0] - INFO [Thread-0:QuorumPeer@1360] - maxSessionTimeout set to 80000
2016-08-03 05:33:41,951 [myid:0] - INFO [Thread-0:QuorumPeer@1375] - initLimit set to 10
2016-08-03 05:33:41,967 [myid:1] - INFO [Thread-1:QuorumPeer@776] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2016-08-03 05:33:41,968 [myid:2] - INFO [Thread-2:QuorumPeer@776] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2016-08-03 05:33:41,967 [myid:0] - INFO [Thread-0:QuorumPeer@776] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2016-08-03 05:33:42,161 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 30073
2016-08-03 05:33:42,162 [myid:] - INFO [main:ClientBase@248] - server 127.0.0.1:30073 not up
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.client.FourLetterWordMain.send4LetterWord(FourLetterWordMain.java:99)
    at org.apache.zookeeper.client.FourLetterWordMain.send4LetterWord(FourLetterWordMain.java:69)
    at org.apache.zookeeper.test.ClientBase.waitForServerUp(ClientBase.java:241)
    at org.apache.zookeeper.test.ClientBase.waitForServerUp(ClientBase.java:232)
    at org.apache.zookeeper.test.SSLTest.testSecureQuorumServer(SSLTest.java:100)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:79)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:53)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
    at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:38)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:535)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1182)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:1033)
2016-08-03 05:33:42,413 [myid:0] - INFO [Thread-0:QuorumPeer@791] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2016-08-03 05:33:42,413 [myid:1] - INFO [Thread-1:QuorumPeer@791] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2016-08-03 05:33:42,413 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 30073
2016-08-03 05:33:42,413 [myid:2] - INFO [Thread-2:QuorumPeer@791] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2016-08-03 05:33:42,414 [myid:] - INFO [main:ClientBase@248] - server 127.0.0.1:30073 not up
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.client.FourLetterWordMain.send4LetterWord(FourLetterWordMain.java:99)
    at org.apache.zookeeper.client.FourLetterWordMain.send4LetterWord(FourLetterWordMain.java:69)
    at org.apache.zookeeper.test.ClientBase.waitForServerUp(ClientBase.java:241)
    at org.apache.zookeeper.test.ClientBase.waitForServerUp(ClientBase.java:232)
    at org.apache.zookeeper.test.SSLTest.testSecureQuorumServer(SSLTest.java:100)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:79)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:53)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
    at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:38)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:535)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1182)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:1033)
2016-08-03 05:33:42,463 [myid:0] - INFO [Thread-0:NettyServerCnxnFactory@487] - binding to port localhost/127.0.0.1:30073
2016-08-03 05:33:42,463 [myid:2] - INFO [Thread-2:NettyServerCnxnFactory@487] - binding to port localhost/127.0.0.1:30081
2016-08-03 05:33:42,470 [myid:1] - INFO [Thread-1:NettyServerCnxnFactory@487] - binding to port localhost/127.0.0.1:30077
2016-08-03 05:33:42,488 [myid:1] - INFO [Thread-1:NettyServerCnxnFactory@487] - binding to port 0.0.0.0/0.0.0.0:30078
2016-08-03 05:33:42,488 [myid:2] - INFO [Thread-2:NettyServerCnxnFactory@487] - binding to port 0.0.0.0/0.0.0.0:30082
2016-08-03 05:33:42,488 [myid:0] - INFO [Thread-0:NettyServerCnxnFactory@487] - binding to port 0.0.0.0/0.0.0.0:30074
2016-08-03 05:33:42,497 [myid:0] - INFO [QuorumPeerListener:QuorumCnxManager$Listener@632] - My election bind port: localhost/127.0.0.1:30076
2016-08-03 05:33:42,498 [myid:2] - INFO [QuorumPeerListener:QuorumCnxManager$Listener@632] - My election bind port: localhost/127.0.0.1:30084
2016-08-03 05:33:42,499 [myid:1] - INFO [QuorumPeerListener:QuorumCnxManager$Listener@632] - My election bind port: localhost/127.0.0.1:30080
2016-08-03 05:33:42,506 [myid:2] - INFO [QuorumPeer[myid=2](plain=localhost/127.0.0.1:30081)(secure=0.0.0.0/0.0.0.0:30082):QuorumPeer@1033] - LOOKING
2016-08-03 05:33:42,506 [myid:1] - INFO [QuorumPeer[myid=1](plain=localhost/127.0.0.1:30077)(secure=0.0.0.0/0.0.0.0:30078):QuorumPeer@1033] - LOOKING
2016-08-03 05:33:42,507 [myid:0] - INFO [QuorumPeer[myid=0](plain=localhost/127.0.0.1:30073)(secure=0.0.0.0/0.0.0.0:30074):QuorumPeer@1033] - LOOKING
2016-08-03 05:33:42,508 [myid:2] - INFO [QuorumPeer[myid=2](plain=localhost/127.0.0.1:30081)(secure=0.0.0.0/0.0.0.0:30082):FastLeaderElection@894] - New election. My id = 2, proposed zxid=0x0
2016-08-03 05:33:42,508 [myid:1] - INFO [QuorumPeer[myid=1](plain=localhost/127.0.0.1:30077)(secure=0.0.0.0/0.0.0.0:30078):FastLeaderElection@894] - New election. My id = 1, proposed zxid=0x0
2016-08-03 05:33:42,508 [myid:0] - INFO [QuorumPeer[myid=0](plain=localhost/127.0.0.1:30073)(secure=0.0.0.0/0.0.0.0:30074):FastLeaderElection@894] - New election. My id = 0, proposed zxid=0x0
2016-08-03 05:33:42,510 [myid:0] - INFO [localhost/127.0.0.1:30076:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:51833
2016-08-03 05:33:42,510 [myid:1] - INFO [localhost/127.0.0.1:30080:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:59407
2016-08-03 05:33:42,511 [myid:0] - INFO [WorkerSender[myid=0]:QuorumCnxManager@276] - Have smaller server identifier, so dropping the connection: (1, 0)
2016-08-03 05:33:42,512 [myid:0] - INFO [WorkerSender[myid=0]:QuorumCnxManager@276] - Have smaller server identifier, so dropping the connection: (2, 0)
2016-08-03 05:33:42,512 [myid:2] - INFO [localhost/127.0.0.1:30084:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:38675
2016-08-03 05:33:42,512 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@688] - Notification: 2 (message format version), 0 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-08-03 05:33:42,512 [myid:0] - INFO [localhost/127.0.0.1:30076:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:51832
2016-08-03 05:33:42,513 [myid:2] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 0 my id = 2
2016-08-03 05:33:42,514 [myid:1] - INFO [WorkerSender[myid=1]:QuorumCnxManager@276] - Have smaller server identifier, so dropping the connection: (2, 1)
2016-08-03 05:33:42,513 [myid:2] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@915] - Connection broken for id 0, my id = 2, error =
java.net.SocketException: Socket closed
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.read(SocketInputStream.java:152)
    at java.net.SocketInputStream.read(SocketInputStream.java:122)
    at java.net.SocketInputStream.read(SocketInputStream.java:210)
    at java.io.DataInputStream.readInt(DataInputStream.java:387)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900)
2016-08-03 05:33:42,514 [myid:0] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@837] - Exception when using channel: for id 2 my id = 0 error = java.net.SocketException: Broken pipe
2016-08-03 05:33:42,514 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@688] - Notification: 2 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-08-03 05:33:42,514 [myid:0] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@915] - Connection broken for id 2, my id = 0, error =
java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:392)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900)
2016-08-03 05:33:42,515 [myid:0] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker
2016-08-03 05:33:42,514 [myid:1] - INFO [localhost/127.0.0.1:30080:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:59410
2016-08-03 05:33:42,515 [myid:2] - INFO [localhost/127.0.0.1:30084:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:38676
2016-08-03 05:33:42,515 [myid:0] - INFO [localhost/127.0.0.1:30076:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:51839
2016-08-03 05:33:42,514 [myid:0] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 2 my id = 0
2016-08-03 05:33:42,514 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@688] - Notification: 2 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-08-03 05:33:42,516 [myid:2] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@832] - Interrupted while waiting for message on queue
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:982)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:63)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:820)
2016-08-03 05:33:42,514 [myid:2] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker
2016-08-03 05:33:42,516 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@688] - Notification: 2 (message format version), 0 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-08-03 05:33:42,516 [myid:2] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 1 my id = 2
2016-08-03 05:33:42,516 [myid:1] - INFO [localhost/127.0.0.1:30080:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:59413
2016-08-03 05:33:42,516 [myid:0] - INFO [WorkerSender[myid=0]:QuorumCnxManager@276] - Have smaller server identifier, so dropping the connection: (2, 0)
2016-08-03 05:33:42,516 [myid:1] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@837] - Exception when using channel: for id 2 my id = 1 error = java.net.SocketException: Broken pipe
2016-08-03 05:33:42,516 [myid:1] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@915] - Connection broken for id 2, my id = 1, error =
java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:392)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900)
2016-08-03 05:33:42,517 [myid:1] - WARN
[RecvWorker:2:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker 2016-08-03 05:33:42,516 [myid:2] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@915] - Connection broken for id 1, my id = 2, error = java.net.SocketException: Socket closed at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.read(SocketInputStream.java:152) at java.net.SocketInputStream.read(SocketInputStream.java:122) at java.net.SocketInputStream.read(SocketInputStream.java:210) at java.io.DataInputStream.readInt(DataInputStream.java:387) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900) 2016-08-03 05:33:42,518 [myid:2] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker 2016-08-03 05:33:42,515 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@688] - Notification: 2 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-08-03 05:33:42,515 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-08-03 05:33:42,517 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@688] - Notification: 2 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-08-03 05:33:42,517 [myid:1] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 2 my id = 1 2016-08-03 05:33:42,516 [myid:2] - INFO [localhost/127.0.0.1:30084:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:38679 2016-08-03 05:33:42,519 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 
(n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-08-03 05:33:42,519 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@688] - Notification: 2 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-08-03 05:33:42,519 [myid:2] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@915] - Connection broken for id 0, my id = 2, error = java.net.SocketException: Socket closed at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.read(SocketInputStream.java:152) at java.net.SocketInputStream.read(SocketInputStream.java:122) at java.net.SocketInputStream.read(SocketInputStream.java:210) at java.io.DataInputStream.readInt(DataInputStream.java:387) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900) 2016-08-03 05:33:42,519 [myid:2] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker 2016-08-03 05:33:42,519 [myid:0] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@915] - Connection broken for id 2, my id = 0, error = java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900) 2016-08-03 05:33:42,520 [myid:0] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker 2016-08-03 05:33:42,519 [myid:2] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@832] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at 
org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:982) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:63) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:820) 2016-08-03 05:33:42,520 [myid:2] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 0 my id = 2 2016-08-03 05:33:42,520 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-08-03 05:33:42,520 [myid:0] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@832] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:982) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:63) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:820) 2016-08-03 05:33:42,521 [myid:0] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 2 my id = 0 2016-08-03 05:33:42,520 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@688] - Notification: 2 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-08-03 05:33:42,519 [myid:0] - INFO [localhost/127.0.0.1:30076:QuorumCnxManager$Listener@638] - Received connection request 
/127.0.0.1:51842 2016-08-03 05:33:42,519 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-08-03 05:33:42,521 [myid:0] - INFO [WorkerSender[myid=0]:QuorumCnxManager@276] - Have smaller server identifier, so dropping the connection: (2, 0) 2016-08-03 05:33:42,521 [myid:2] - INFO [localhost/127.0.0.1:30084:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:38682 2016-08-03 05:33:42,522 [myid:0] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@915] - Connection broken for id 2, my id = 0, error = java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900) 2016-08-03 05:33:42,522 [myid:0] - INFO [localhost/127.0.0.1:30076:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:51844 2016-08-03 05:33:42,522 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-08-03 05:33:42,522 [myid:2] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@915] - Connection broken for id 0, my id = 2, error = java.net.SocketException: Socket closed at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.read(SocketInputStream.java:152) at java.net.SocketInputStream.read(SocketInputStream.java:122) at java.net.SocketInputStream.read(SocketInputStream.java:210) at java.io.DataInputStream.readInt(DataInputStream.java:387) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900) 2016-08-03 05:33:42,523 [myid:2] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker 
2016-08-03 05:33:42,522 [myid:2] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@832] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:982) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:63) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:820) 2016-08-03 05:33:42,524 [myid:2] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 0 my id = 2 2016-08-03 05:33:42,524 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-08-03 05:33:42,523 [myid:0] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@832] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:982) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:63) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:820) 2016-08-03 05:33:42,523 
[myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-08-03 05:33:42,523 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-08-03 05:33:42,522 [myid:0] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker 2016-08-03 05:33:42,524 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-08-03 05:33:42,524 [myid:0] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 2 my id = 0 2016-08-03 05:33:42,525 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-08-03 05:33:42,525 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-08-03 05:33:42,526 [myid:0] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-08-03 05:33:42,526 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 
(n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-08-03 05:33:42,665 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 30073 2016-08-03 05:33:42,701 [myid:0] - INFO [New I/O worker #1:NettyServerCnxn@275] - Processing stat command from /127.0.0.1:60036 2016-08-03 05:33:42,725 [myid:1] - INFO [QuorumPeer[myid=1](plain=localhost/127.0.0.1:30077)(secure=0.0.0.0/0.0.0.0:30078):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id1,name1=replica.1,name2=LeaderElection] 2016-08-03 05:33:42,726 [myid:1] - INFO [QuorumPeer[myid=1](plain=localhost/127.0.0.1:30077)(secure=0.0.0.0/0.0.0.0:30078):QuorumPeer@1109] - FOLLOWING 2016-08-03 05:33:42,726 [myid:0] - INFO [QuorumPeer[myid=0](plain=localhost/127.0.0.1:30073)(secure=0.0.0.0/0.0.0.0:30074):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id0,name1=replica.0,name2=LeaderElection] 2016-08-03 05:33:42,726 [myid:2] - INFO [QuorumPeer[myid=2](plain=localhost/127.0.0.1:30081)(secure=0.0.0.0/0.0.0.0:30082):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id2,name1=replica.2,name2=LeaderElection] 2016-08-03 05:33:42,727 [myid:0] - INFO [QuorumPeer[myid=0](plain=localhost/127.0.0.1:30073)(secure=0.0.0.0/0.0.0.0:30074):QuorumPeer@1109] - FOLLOWING 2016-08-03 05:33:42,727 [myid:2] - INFO [QuorumPeer[myid=2](plain=localhost/127.0.0.1:30081)(secure=0.0.0.0/0.0.0.0:30082):QuorumPeer@1121] - LEADING 2016-08-03 05:33:42,730 [myid:2] - INFO [QuorumPeer[myid=2](plain=localhost/127.0.0.1:30081)(secure=0.0.0.0/0.0.0.0:30082):Leader@63] - TCP NoDelay set to: true 2016-08-03 05:33:42,730 [myid:2] - INFO [QuorumPeer[myid=2](plain=localhost/127.0.0.1:30081)(secure=0.0.0.0/0.0.0.0:30082):Leader@83] - zookeeper.leader.maxConcurrentSnapshots = 10 2016-08-03 05:33:42,730 [myid:2] - INFO [QuorumPeer[myid=2](plain=localhost/127.0.0.1:30081)(secure=0.0.0.0/0.0.0.0:30082):Leader@85] - 
zookeeper.leader.maxConcurrentSnapshotTimeout = 5 2016-08-03 05:33:42,731 [myid:1] - INFO [QuorumPeer[myid=1](plain=localhost/127.0.0.1:30077)(secure=0.0.0.0/0.0.0.0:30078):Learner@88] - TCP NoDelay set to: true 2016-08-03 05:33:42,737 [myid:0] - INFO [QuorumPeer[myid=0](plain=localhost/127.0.0.1:30073)(secure=0.0.0.0/0.0.0.0:30074):Environment@109] - Server environment:zookeeper.version=3.6.0-SNAPSHOT-1755017, built on 08/03/2016 05:24 GMT 2016-08-03 05:33:42,738 [myid:0] - INFO [QuorumPeer[myid=0](plain=localhost/127.0.0.1:30073)(secure=0.0.0.0/0.0.0.0:30074):Environment@109] - Server environment:host.name=asf909.gq1.ygridcore.net 2016-08-03 05:33:42,738 [myid:0] - INFO [QuorumPeer[myid=0](plain=localhost/127.0.0.1:30073)(secure=0.0.0.0/0.0.0.0:30074):Environment@109] - Server environment:java.version=1.7.0_80 2016-08-03 05:33:42,738 [myid:0] - INFO [QuorumPeer[myid=0](plain=localhost/127.0.0.1:30073)(secure=0.0.0.0/0.0.0.0:30074):Environment@109] - Server environment:java.vendor=Oracle Corporation 2016-08-03 05:33:42,738 [myid:0] - INFO [QuorumPeer[myid=0](plain=localhost/127.0.0.1:30073)(secure=0.0.0.0/0.0.0.0:30074):Environment@109] - Server environment:java.home=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.7/jre 2016-08-03 05:33:42,738 [myid:0] - INFO [QuorumPeer[myid=0](plain=localhost/127.0.0.1:30073)(secure=0.0.0.0/0.0.0.0:30074):Environment@109] - Server 
environment:java.class.path=/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/classes:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/lib/antlr-2.7.7.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/lib/antlr4-runtime-4.5.1-1.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/lib/checkstyle-6.13.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/lib/commons-beanutils-1.9.2.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/lib/commons-cli-1.3.1.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/lib/commons-collections-3.2.2.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/lib/commons-lang3-3.4.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/lib/commons-logging-1.1.1.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/lib/guava-18.0.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/lib/hamcrest-core-1.3.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/lib/junit-4.12.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/lib/mockito-all-1.8.2.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/classes:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/src/java/lib/ivy-2.4.0.jar:/home/jenkins/tools/ant/latest/lib/ant.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/lib/apache-rat-core-0.10.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/lib/apache-rat-tasks-0.10.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/lib/commons-cli-1.2.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build
/trunk/build/lib/commons-collections-3.2.2.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/lib/commons-compress-1.5.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/lib/commons-io-2.2.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/lib/commons-lang-2.6.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/lib/jackson-core-asl-1.9.11.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/lib/jackson-mapper-asl-1.9.11.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/lib/javacc.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/lib/jetty-6.1.26.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/lib/jetty-util-6.1.26.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/lib/jline-2.11.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/lib/log4j-1.2.17.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/lib/netty-3.10.5.Final.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/lib/servlet-api-2.5-20081211.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/lib/slf4j-api-1.7.5.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/lib/slf4j-log4j12-1.7.5.jar:/usr/local/asfpackages/ant/apache-ant-1.9.7/lib/ant-launcher.jar:/home/jenkins/tools/ant/latest/lib/ant-junit.jar:/home/jenkins/tools/ant/latest/lib/ant-junit4.jar 2016-08-03 05:33:42,738 [myid:0] - INFO [QuorumPeer[myid=0](plain=localhost/127.0.0.1:30073)(secure=0.0.0.0/0.0.0.0:30074):Environment@109] - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib 2016-08-03 05:33:42,738 [myid:0] - INFO 
[QuorumPeer[myid=0](plain=localhost/127.0.0.1:30073)(secure=0.0.0.0/0.0.0.0:30074):Environment@109] - Server environment:java.io.tmpdir=/tmp 2016-08-03 05:33:42,738 [myid:0] - INFO [QuorumPeer[myid=0](plain=localhost/127.0.0.1:30073)(secure=0.0.0.0/0.0.0.0:30074):Environment@109] - Server environment:java.compiler=<NA> 2016-08-03 05:33:42,738 [myid:0] - INFO [QuorumPeer[myid=0](plain=localhost/127.0.0.1:30073)(secure=0.0.0.0/0.0.0.0:30074):Environment@109] - Server environment:os.name=Linux 2016-08-03 05:33:42,738 [myid:0] - INFO [QuorumPeer[myid=0](plain=localhost/127.0.0.1:30073)(secure=0.0.0.0/0.0.0.0:30074):Environment@109] - Server environment:os.arch=amd64 2016-08-03 05:33:42,739 [myid:0] - INFO [QuorumPeer[myid=0](plain=localhost/127.0.0.1:30073)(secure=0.0.0.0/0.0.0.0:30074):Environment@109] - Server environment:os.version=3.13.0-36-lowlatency 2016-08-03 05:33:42,739 [myid:0] - INFO [QuorumPeer[myid=0](plain=localhost/127.0.0.1:30073)(secure=0.0.0.0/0.0.0.0:30074):Environment@109] - Server environment:user.name=jenkins 2016-08-03 05:33:42,739 [myid:0] - INFO [QuorumPeer[myid=0](plain=localhost/127.0.0.1:30073)(secure=0.0.0.0/0.0.0.0:30074):Environment@109] - Server environment:user.home=/home/jenkins 2016-08-03 05:33:42,739 [myid:0] - INFO [QuorumPeer[myid=0](plain=localhost/127.0.0.1:30073)(secure=0.0.0.0/0.0.0.0:30074):Environment@109] - Server environment:user.dir=/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test 2016-08-03 05:33:42,739 [myid:0] - INFO [QuorumPeer[myid=0](plain=localhost/127.0.0.1:30073)(secure=0.0.0.0/0.0.0.0:30074):Environment@109] - Server environment:os.memory.free=437MB 2016-08-03 05:33:42,739 [myid:0] - INFO [QuorumPeer[myid=0](plain=localhost/127.0.0.1:30073)(secure=0.0.0.0/0.0.0.0:30074):Environment@109] - Server environment:os.memory.max=491MB 2016-08-03 05:33:42,739 [myid:0] - INFO [QuorumPeer[myid=0](plain=localhost/127.0.0.1:30073)(secure=0.0.0.0/0.0.0.0:30074):Environment@109] - Server 
environment:os.memory.total=491MB 2016-08-03 05:33:42,741 [myid:0] - INFO [QuorumPeer[myid=0](plain=localhost/127.0.0.1:30073)(secure=0.0.0.0/0.0.0.0:30074):ZooKeeperServer@858] - minSessionTimeout set to 8000 2016-08-03 05:33:42,741 [myid:2] - INFO [QuorumPeer[myid=2](plain=localhost/127.0.0.1:30081)(secure=0.0.0.0/0.0.0.0:30082):ZooKeeperServer@858] - minSessionTimeout set to 8000 2016-08-03 05:33:42,741 [myid:2] - INFO [QuorumPeer[myid=2](plain=localhost/127.0.0.1:30081)(secure=0.0.0.0/0.0.0.0:30082):ZooKeeperServer@867] - maxSessionTimeout set to 80000 2016-08-03 05:33:42,741 [myid:1] - INFO [QuorumPeer[myid=1](plain=localhost/127.0.0.1:30077)(secure=0.0.0.0/0.0.0.0:30078):ZooKeeperServer@858] - minSessionTimeout set to 8000 2016-08-03 05:33:42,741 [myid:1] - INFO [QuorumPeer[myid=1](plain=localhost/127.0.0.1:30077)(secure=0.0.0.0/0.0.0.0:30078):ZooKeeperServer@867] - maxSessionTimeout set to 80000 2016-08-03 05:33:42,741 [myid:2] - INFO [QuorumPeer[myid=2](plain=localhost/127.0.0.1:30081)(secure=0.0.0.0/0.0.0.0:30082):ZooKeeperServer@156] - Created server with tickTime 4000 minSessionTimeout 8000 maxSessionTimeout 80000 datadir /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/tmp/test4754133370740902988.junit.dir/data/version-2 snapdir /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/tmp/test4754133370740902988.junit.dir/data/version-2 2016-08-03 05:33:42,741 [myid:0] - INFO [QuorumPeer[myid=0](plain=localhost/127.0.0.1:30073)(secure=0.0.0.0/0.0.0.0:30074):ZooKeeperServer@867] - maxSessionTimeout set to 80000 2016-08-03 05:33:42,742 [myid:0] - INFO [QuorumPeer[myid=0](plain=localhost/127.0.0.1:30073)(secure=0.0.0.0/0.0.0.0:30074):ZooKeeperServer@156] - Created server with tickTime 4000 minSessionTimeout 8000 maxSessionTimeout 80000 datadir /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/tmp/test8166855371662584608.junit.dir/data/version-2 snapdir 
/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/tmp/test8166855371662584608.junit.dir/data/version-2 2016-08-03 05:33:42,741 [myid:1] - INFO [QuorumPeer[myid=1](plain=localhost/127.0.0.1:30077)(secure=0.0.0.0/0.0.0.0:30078):ZooKeeperServer@156] - Created server with tickTime 4000 minSessionTimeout 8000 maxSessionTimeout 80000 datadir /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/tmp/test5311610672905591522.junit.dir/data/version-2 snapdir /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/tmp/test5311610672905591522.junit.dir/data/version-2 2016-08-03 05:33:42,742 [myid:0] - INFO [QuorumPeer[myid=0](plain=localhost/127.0.0.1:30073)(secure=0.0.0.0/0.0.0.0:30074):Follower@66] - FOLLOWING - LEADER ELECTION TOOK - 15 MS 2016-08-03 05:33:42,742 [myid:1] - INFO [QuorumPeer[myid=1](plain=localhost/127.0.0.1:30077)(secure=0.0.0.0/0.0.0.0:30078):Follower@66] - FOLLOWING - LEADER ELECTION TOOK - 16 MS 2016-08-03 05:33:42,743 [myid:2] - INFO [QuorumPeer[myid=2](plain=localhost/127.0.0.1:30081)(secure=0.0.0.0/0.0.0.0:30082):Leader@412] - LEADING - LEADER ELECTION TOOK - 16 MS 2016-08-03 05:33:42,747 [myid:2] - INFO [QuorumPeer[myid=2](plain=localhost/127.0.0.1:30081)(secure=0.0.0.0/0.0.0.0:30082):FileTxnSnapLog@298] - Snapshotting: 0x0 to /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/tmp/test4754133370740902988.junit.dir/data/version-2/snapshot.0 2016-08-03 05:33:42,764 [myid:2] - INFO [LearnerHandler-/127.0.0.1:38806:LearnerHandler@382] - Follower sid: 1 : info : localhost:30079:30080:participant;localhost:30077 2016-08-03 05:33:42,764 [myid:2] - INFO [LearnerHandler-/127.0.0.1:38807:LearnerHandler@382] - Follower sid: 0 : info : localhost:30075:30076:participant;localhost:30073 2016-08-03 05:33:42,961 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 30073 2016-08-03 05:33:42,962 [myid:0] - INFO [New I/O worker 
#5:NettyServerCnxn@275] - Processing stat command from /127.0.0.1:60040 2016-08-03 05:33:43,213 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 30073 2016-08-03 05:33:43,215 [myid:0] - INFO [New I/O worker #8:NettyServerCnxn@275] - Processing stat command from /127.0.0.1:60043 2016-08-03 05:33:43,465 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 30073 2016-08-03 05:33:43,467 [myid:0] - INFO [New I/O worker #12:NettyServerCnxn@275] - Processing stat command from /127.0.0.1:60044 2016-08-03 05:33:43,718 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 30073 2016-08-03 05:33:43,719 [myid:0] - INFO [New I/O worker #15:NettyServerCnxn@275] - Processing stat command from /127.0.0.1:60047 2016-08-03 05:33:43,725 [myid:2] - INFO [LearnerHandler-/127.0.0.1:38806:LearnerHandler@683] - Synchronizing with Follower sid: 1 maxCommittedLog=0x0 minCommittedLog=0x0 lastProcessedZxid=0x0 peerLastZxid=0x0 2016-08-03 05:33:43,725 [myid:2] - INFO [LearnerHandler-/127.0.0.1:38806:LearnerHandler@727] - Sending DIFF zxid=0x0 for peer sid: 1 2016-08-03 05:33:43,727 [myid:1] - INFO [QuorumPeer[myid=1](plain=localhost/127.0.0.1:30077)(secure=0.0.0.0/0.0.0.0:30078):Learner@366] - Getting a diff from the leader 0x0 2016-08-03 05:33:43,732 [myid:1] - INFO [QuorumPeer[myid=1](plain=localhost/127.0.0.1:30077)(secure=0.0.0.0/0.0.0.0:30078):Learner@509] - Learner received NEWLEADER message 2016-08-03 05:33:43,783 [myid:2] - INFO [LearnerHandler-/127.0.0.1:38807:LearnerHandler@683] - Synchronizing with Follower sid: 0 maxCommittedLog=0x0 minCommittedLog=0x0 lastProcessedZxid=0x0 peerLastZxid=0x0 2016-08-03 05:33:43,783 [myid:2] - INFO [LearnerHandler-/127.0.0.1:38807:LearnerHandler@727] - Sending DIFF zxid=0x0 for peer sid: 0 2016-08-03 05:33:43,784 [myid:0] - INFO [QuorumPeer[myid=0](plain=localhost/127.0.0.1:30073)(secure=0.0.0.0/0.0.0.0:30074):Learner@366] - Getting a diff from the leader 0x0 2016-08-03 05:33:43,784 [myid:0] - 
INFO [QuorumPeer[myid=0](plain=localhost/127.0.0.1:30073)(secure=0.0.0.0/0.0.0.0:30074):Learner@509] - Learner received NEWLEADER message 2016-08-03 05:33:43,908 [myid:1] - INFO [QuorumPeer[myid=1](plain=localhost/127.0.0.1:30077)(secure=0.0.0.0/0.0.0.0:30078):FileTxnSnapLog@298] - Snapshotting: 0x0 to /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/tmp/test5311610672905591522.junit.dir/data/version-2/snapshot.0 2016-08-03 05:33:43,908 [myid:0] - INFO [QuorumPeer[myid=0](plain=localhost/127.0.0.1:30073)(secure=0.0.0.0/0.0.0.0:30074):FileTxnSnapLog@298] - Snapshotting: 0x0 to /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/tmp/test8166855371662584608.junit.dir/data/version-2/snapshot.0 2016-08-03 05:33:43,964 [myid:2] - INFO [QuorumPeer[myid=2](plain=localhost/127.0.0.1:30081)(secure=0.0.0.0/0.0.0.0:30082):Leader@1249] - Have quorum of supporters, sids: [ [0, 2],[0, 2] ]; starting up and setting last processed zxid: 0x100000000 2016-08-03 05:33:43,970 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 30073 2016-08-03 05:33:43,971 [myid:0] - INFO [New I/O worker #17:NettyServerCnxn@275] - Processing stat command from /127.0.0.1:60050 2016-08-03 05:33:44,222 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 30073 2016-08-03 05:33:44,223 [myid:0] - INFO [New I/O worker #20:NettyServerCnxn@275] - Processing stat command from /127.0.0.1:60051 2016-08-03 05:33:44,474 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 30073 2016-08-03 05:33:44,475 [myid:0] - INFO [New I/O worker #23:NettyServerCnxn@275] - Processing stat command from /127.0.0.1:60055 2016-08-03 05:33:44,726 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 30073 2016-08-03 05:33:44,728 [myid:0] - INFO [New I/O worker #26:NettyServerCnxn@275] - Processing stat command from /127.0.0.1:60056 2016-08-03 05:33:44,729 [myid:] - INFO 
[main:JUnit4ZKTestRunner$LoggedInvokeMethod@98] - TEST METHOD FAILED testSecureQuorumServer java.lang.AssertionError: waiting for server 0 being up at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.assertTrue(Assert.java:41) at org.apache.zookeeper.test.SSLTest.testSecureQuorumServer(SSLTest.java:100) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:79) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:53) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:38) at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:535) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1182) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:1033) 2016-08-03 05:33:44,731 [myid:] - INFO [main:ZKTestCase$1@70] - FAILED testSecureQuorumServer java.lang.AssertionError: waiting for server 0 being up at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.assertTrue(Assert.java:41) at org.apache.zookeeper.test.SSLTest.testSecureQuorumServer(SSLTest.java:100) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:79) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:53) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at 
org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:38) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:535) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1182) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:1033) 2016-08-03 05:33:44,732 [myid:] - INFO [main:ZKTestCase$1@60] - FINISHED testSecureQuorumServer 2016-08-03 05:33:44,736 [myid:] - INFO [main:ZKTestCase$1@55] - STARTING testSecureStandaloneServer 2016-08-03 05:33:44,737 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@77] - RUNNING TEST METHOD testSecureStandaloneServer 2016-08-03 05:33:44,737 [myid:] - INFO [main:PortAssignment@85] - Assigned port 30085 from range 30072 - 32764. 2016-08-03 05:33:44,737 [myid:] - INFO [main:QuorumPeerTestBase$MainThread@131] - id = -1 tmpDir = /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/tmp/test1220380102294608498.junit.dir clientPort = -1 adminServerPort = 8080 2016-08-03 05:33:44,738 [myid:] - INFO [Thread-6:QuorumPeerConfig@116] - Reading configuration from: /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/tmp/test1220380102294608498.junit.dir/zoo.cfg 2016-08-03 05:33:44,738 [myid:] - INFO [Thread-6:QuorumPeerConfig@308] - clientPort is not set 2016-08-03 05:33:44,739 [myid:] - INFO [Thread-6:QuorumPeerConfig@332] - secureClientPortAddress is 0.0.0.0/0.0.0.0:30085 2016-08-03 05:33:44,739 [myid:-1] - INFO [Thread-6:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3 2016-08-03 05:33:44,739 [myid:-1] - INFO [Thread-6:DatadirCleanupManager@79] - autopurge.purgeInterval set to 0 2016-08-03 05:33:44,739 [myid:-1] - INFO [Thread-6:DatadirCleanupManager@101] 
- Purge task is not scheduled. 2016-08-03 05:33:44,740 [myid:-1] - WARN [Thread-6:QuorumPeerMain@122] - Either no config or no quorum defined in config, running in standalone mode 2016-08-03 05:33:44,741 [myid:-1] - INFO [Thread-6:ManagedUtil@46] - Log4j found with jmx enabled. 2016-08-03 05:33:44,741 [myid:-1] - ERROR [Thread-6:ManagedUtil@114] - Problems while registering log4j jmx beans! javax.management.InstanceAlreadyExistsException: log4j:hiearchy=default at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324) at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) at org.apache.zookeeper.jmx.ManagedUtil.registerLog4jMBeans(ManagedUtil.java:75) at org.apache.zookeeper.server.ZooKeeperServerMain.initializeAndRun(ZooKeeperServerMain.java:91) at org.apache.zookeeper.server.ZooKeeperServerMain.main(ZooKeeperServerMain.java:61) at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:125) at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.run(QuorumPeerTestBase.java:245) at java.lang.Thread.run(Thread.java:745) 2016-08-03 05:33:44,742 [myid:-1] - WARN [Thread-6:ZooKeeperServerMain@93] - Unable to register log4j JMX control javax.management.JMException: javax.management.InstanceAlreadyExistsException: log4j:hiearchy=default at org.apache.zookeeper.jmx.ManagedUtil.registerLog4jMBeans(ManagedUtil.java:115) at org.apache.zookeeper.server.ZooKeeperServerMain.initializeAndRun(ZooKeeperServerMain.java:91) at 
org.apache.zookeeper.server.ZooKeeperServerMain.main(ZooKeeperServerMain.java:61) at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:125) at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.run(QuorumPeerTestBase.java:245) at java.lang.Thread.run(Thread.java:745) 2016-08-03 05:33:44,743 [myid:-1] - INFO [Thread-6:QuorumPeerConfig@116] - Reading configuration from: /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/tmp/test1220380102294608498.junit.dir/zoo.cfg 2016-08-03 05:33:44,743 [myid:] - INFO [main:Environment@109] - Client environment:zookeeper.version=3.6.0-SNAPSHOT-1755017, built on 08/03/2016 05:24 GMT 2016-08-03 05:33:44,743 [myid:] - INFO [main:Environment@109] - Client environment:host.name=asf909.gq1.ygridcore.net 2016-08-03 05:33:44,744 [myid:] - INFO [main:Environment@109] - Client environment:java.version=1.7.0_80 2016-08-03 05:33:44,744 [myid:] - INFO [main:Environment@109] - Client environment:java.vendor=Oracle Corporation 2016-08-03 05:33:44,744 [myid:] - INFO [main:Environment@109] - Client environment:java.home=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.7/jre 2016-08-03 05:33:44,743 [myid:-1] - INFO [Thread-6:QuorumPeerConfig@308] - clientPort is not set 2016-08-03 05:33:44,744 [myid:] - INFO [main:Environment@109] - Client 
environment:java.class.path=/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/classes:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/lib/antlr-2.7.7.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/lib/antlr4-runtime-4.5.1-1.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/lib/checkstyle-6.13.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/lib/commons-beanutils-1.9.2.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/lib/commons-cli-1.3.1.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/lib/commons-collections-3.2.2.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/lib/commons-lang3-3.4.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/lib/commons-logging-1.1.1.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/lib/guava-18.0.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/lib/hamcrest-core-1.3.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/lib/junit-4.12.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/lib/mockito-all-1.8.2.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/classes:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/src/java/lib/ivy-2.4.0.jar:/home/jenkins/tools/ant/latest/lib/ant.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/lib/apache-rat-core-0.10.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/lib/apache-rat-tasks-0.10.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/lib/commons-cli-1.2.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build
/trunk/build/lib/commons-collections-3.2.2.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/lib/commons-compress-1.5.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/lib/commons-io-2.2.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/lib/commons-lang-2.6.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/lib/jackson-core-asl-1.9.11.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/lib/jackson-mapper-asl-1.9.11.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/lib/javacc.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/lib/jetty-6.1.26.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/lib/jetty-util-6.1.26.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/lib/jline-2.11.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/lib/log4j-1.2.17.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/lib/netty-3.10.5.Final.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/lib/servlet-api-2.5-20081211.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/lib/slf4j-api-1.7.5.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/lib/slf4j-log4j12-1.7.5.jar:/usr/local/asfpackages/ant/apache-ant-1.9.7/lib/ant-launcher.jar:/home/jenkins/tools/ant/latest/lib/ant-junit.jar:/home/jenkins/tools/ant/latest/lib/ant-junit4.jar 2016-08-03 05:33:44,745 [myid:] - INFO [main:Environment@109] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib 2016-08-03 05:33:44,745 [myid:] - INFO [main:Environment@109] - Client environment:java.io.tmpdir=/tmp 2016-08-03 05:33:44,745 [myid:] - INFO [main:Environment@109] - Client environment:java.compiler=<NA> 
2016-08-03 05:33:44,745 [myid:] - INFO [main:Environment@109] - Client environment:os.name=Linux 2016-08-03 05:33:44,745 [myid:] - INFO [main:Environment@109] - Client environment:os.arch=amd64 2016-08-03 05:33:44,744 [myid:-1] - INFO [Thread-6:QuorumPeerConfig@332] - secureClientPortAddress is 0.0.0.0/0.0.0.0:30085 2016-08-03 05:33:44,745 [myid:] - INFO [main:Environment@109] - Client environment:os.version=3.13.0-36-lowlatency 2016-08-03 05:33:44,746 [myid:] - INFO [main:Environment@109] - Client environment:user.name=jenkins 2016-08-03 05:33:44,746 [myid:] - INFO [main:Environment@109] - Client environment:user.home=/home/jenkins 2016-08-03 05:33:44,746 [myid:] - INFO [main:Environment@109] - Client environment:user.dir=/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test 2016-08-03 05:33:44,746 [myid:] - INFO [main:Environment@109] - Client environment:os.memory.free=424MB 2016-08-03 05:33:44,747 [myid:] - INFO [main:Environment@109] - Client environment:os.memory.max=491MB 2016-08-03 05:33:44,747 [myid:] - INFO [main:Environment@109] - Client environment:os.memory.total=491MB 2016-08-03 05:33:44,746 [myid:-1] - INFO [Thread-6:ZooKeeperServerMain@113] - Starting server 2016-08-03 05:33:44,747 [myid:-1] - INFO [Thread-6:ZooKeeperServer@858] - minSessionTimeout set to 8000 2016-08-03 05:33:44,748 [myid:-1] - INFO [Thread-6:ZooKeeperServer@867] - maxSessionTimeout set to 80000 2016-08-03 05:33:44,748 [myid:-1] - INFO [Thread-6:ZooKeeperServer@156] - Created server with tickTime 4000 minSessionTimeout 8000 maxSessionTimeout 80000 datadir /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/tmp/test1220380102294608498.junit.dir/data/version-2 snapdir /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/tmp/test1220380102294608498.junit.dir/data/version-2 2016-08-03 05:33:44,749 [myid:] - INFO [main:ZooKeeper@855] - Initiating client connection, connectString=127.0.0.1:30085 
sessionTimeout=3000 watcher=org.apache.zookeeper.test.SSLTest$2@10614f3d 2016-08-03 05:33:44,760 [myid:-1] - INFO [Thread-6:NettyServerCnxnFactory@487] - binding to port 0.0.0.0/0.0.0.0:30085 2016-08-03 05:33:44,761 [myid:-1] - INFO [Thread-6:FileTxnSnapLog@298] - Snapshotting: 0x0 to /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/tmp/test1220380102294608498.junit.dir/data/version-2/snapshot.0 2016-08-03 05:33:44,778 [myid:-1] - INFO [Thread-6:ContainerManager@64] - Using checkIntervalMs=60000 maxPerMinute=10000 2016-08-03 05:33:44,783 [myid:127.0.0.1:30085] - INFO [main-SendThread(127.0.0.1:30085):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:30085. Will not attempt to authenticate using SASL (unknown error) 2016-08-03 05:33:45,065 [myid:127.0.0.1:30085] - INFO [main-SendThread(127.0.0.1:30085):ClientCnxnSocketNetty$ZKClientPipelineFactory@370] - SSL handler added for channel: null 2016-08-03 05:33:45,073 [myid:] - INFO [New I/O worker #221:ClientCnxn$SendThread@948] - Socket connection established, initiating session, client: /127.0.0.1:57010, server: 127.0.0.1/127.0.0.1:30085 2016-08-03 05:33:45,076 [myid:] - INFO [New I/O worker #221:ClientCnxnSocketNetty$1@153] - channel is connected: [id: 0x073f691d, /127.0.0.1:57010 => 127.0.0.1/127.0.0.1:30085] 2016-08-03 05:33:45,079 [myid:-1] - INFO [New I/O server boss #242:NettyServerCnxnFactory@384] - SSL handler added for channel: null 2016-08-03 05:33:45,214 [myid:-1] - INFO [New I/O worker #199:X509AuthenticationProvider@157] - Authenticated Id 'CN=localhost,OU=ZooKeeper,O=Apache,L=Unknown,ST=Unknown,C=Unknown' for Scheme 'x509' 2016-08-03 05:33:45,215 [myid:-1] - INFO [New I/O worker #199:ZooKeeperServer@964] - Client attempting to establish new session at /127.0.0.1:57010 2016-08-03 05:33:45,218 [myid:-1] - INFO [SyncThread:0:FileTxnLog@204] - Creating new log file: log.1 2016-08-03 05:33:45,256 [myid:2] - INFO 
[QuorumPeer[myid=2](plain=localhost/127.0.0.1:30081)(secure=0.0.0.0/0.0.0.0:30082):CommitProcessor@318] - Configuring CommitProcessor with 16 worker threads. 2016-08-03 05:33:45,260 [myid:2] - INFO [QuorumPeer[myid=2](plain=localhost/127.0.0.1:30081)(secure=0.0.0.0/0.0.0.0:30082):ContainerManager@64] - Using checkIntervalMs=60000 maxPerMinute=10000 2016-08-03 05:33:45,261 [myid:1] - INFO [QuorumPeer[myid=1](plain=localhost/127.0.0.1:30077)(secure=0.0.0.0/0.0.0.0:30078):Learner@493] - Learner received UPTODATE message 2016-08-03 05:33:45,261 [myid:0] - INFO [QuorumPeer[myid=0](plain=localhost/127.0.0.1:30073)(secure=0.0.0.0/0.0.0.0:30074):Learner@493] - Learner received UPTODATE message 2016-08-03 05:33:45,349 [myid:-1] - INFO [SyncThread:0:ZooKeeperServer@678] - Established session 0x101a7cee3fd0000 with negotiated timeout 8000 for client /127.0.0.1:57010 2016-08-03 05:33:45,350 [myid:] - INFO [New I/O worker #221:ClientCnxn$SendThread@1381] - Session establishment complete on server 127.0.0.1/127.0.0.1:30085, sessionid = 0x101a7cee3fd0000, negotiated timeout = 8000 2016-08-03 05:33:45,421 [myid:1] - INFO [QuorumPeer[myid=1](plain=localhost/127.0.0.1:30077)(secure=0.0.0.0/0.0.0.0:30078):CommitProcessor@318] - Configuring CommitProcessor with 16 worker threads. 2016-08-03 05:33:45,421 [myid:0] - INFO [QuorumPeer[myid=0](plain=localhost/127.0.0.1:30073)(secure=0.0.0.0/0.0.0.0:30074):CommitProcessor@318] - Configuring CommitProcessor with 16 worker threads. 
2016-08-03 05:33:45,459 [myid:-1] - INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@647] - Processed session termination for sessionid: 0x101a7cee3fd0000 2016-08-03 05:33:45,503 [myid:] - INFO [main:ClientCnxnSocketNetty@208] - channel is told closing 2016-08-03 05:33:45,503 [myid:] - INFO [main:ZooKeeper@1313] - Session: 0x101a7cee3fd0000 closed 2016-08-03 05:33:45,503 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@513] - EventThread shut down for session: 0x101a7cee3fd0000 2016-08-03 05:33:45,503 [myid:] - INFO [New I/O worker #221:ClientCnxnSocketNetty$ZKClientHandler@384] - channel is disconnected: [id: 0x073f691d, /127.0.0.1:57010 :> 127.0.0.1/127.0.0.1:30085] 2016-08-03 05:33:45,504 [myid:] - INFO [New I/O worker #221:ClientCnxnSocketNetty@208] - channel is told closing 2016-08-03 05:33:45,510 [myid:-1] - INFO [SyncThread:0:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port-1,name1=Connections,name2=127.0.0.1,name3=0x101a7cee3fd0000] 2016-08-03 05:33:46,003 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@82] - Memory used 92699 2016-08-03 05:33:46,004 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@87] - Number of threads 283 2016-08-03 05:33:46,004 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@102] - FINISHED TEST METHOD testSecureStandaloneServer 2016-08-03 05:33:46,004 [myid:] - INFO [main:ZKTestCase$1@65] - SUCCEEDED testSecureStandaloneServer 2016-08-03 05:33:46,004 [myid:] - INFO [main:ZKTestCase$1@60] - FINISHED testSecureStandaloneServer {noformat} |
flaky, flaky-test | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 1 year, 21 weeks ago | 0|i31u67: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2498 | Potential resource leak in C client when processing unexpected / out of order response |
Bug | Closed | Major | Fixed | Michael Han | Michael Han | Michael Han | 03/Aug/16 01:01 | 04/Sep/16 01:27 | 03/Aug/16 14:22 | 3.4.8, 3.5.2 | 3.4.9, 3.5.3 | c client | 0 | 4 | In the C client, we use reference counting to decide whether a given zh handle can be destroyed. This requires that api_prolog (which increments the counter) and api_epilog (which decrements the counter) are always called in pairs for a given call context. In zookeeper_process, there is a place where the code returns without invoking api_epilog, which can lead to a resource leak of the zh handle. |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 33 weeks, 1 day ago | 0|i31u5b: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2497 | ZOOKEEPER-3170 Flaky Test: org.apache.zookeeper.test.QuorumTest.testMultipleWatcherObjs |
Sub-task | Closed | Major | Cannot Reproduce | Andor Molnar | Michael Han | Michael Han | 02/Aug/16 11:27 | 19/Dec/19 18:02 | 25/Oct/18 11:06 | 3.5.2 | 3.5.5 | quorum, tests | 0 | 2 | ZOOKEEPER-2135 | Example: https://builds.apache.org/job/ZooKeeper-trunk-jdk8/607/ https://builds.apache.org/job/ZooKeeper_branch35_jdk8/127/ Note: I haven't found any Jenkins JDK7 builds failing with the same error message, so I'm not sure whether this is JDK8-specific. {noformat} 1 tests failed. FAILED: org.apache.zookeeper.test.QuorumTest.testMultipleWatcherObjs Error Message: Timeout occurred. Please note the time in the report does not reflect the time until the timeout. Stack Trace: junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout. at java.lang.Thread.run(Thread.java:745) {noformat} |
flaky, flaky-test | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 1 year, 21 weeks ago | 0|i31t3r: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2496 | When inside a transaction, some exceptions do not have path information set. |
Bug | Open | Major | Unresolved | Unassigned | Kazuaki Banzai | Kazuaki Banzai | 01/Aug/16 03:52 | 27/Sep/16 08:24 | 3.4.8, 3.5.1 | 1 | 6 | If a client tries to execute an illegal operation inside a transaction, ZooKeeper throws an exception. Some exceptions, such as NodeExistsException, should have a path to indicate where the exception occurred. ZooKeeper clients can get the path by calling the getPath method. However, this method returns null if the exception occurs inside a transaction. For example, when a client calls create /a and create /a in a transaction, ZooKeeper throws NodeExistsException but getPath returns null. In normal operation (outside transactions), the path information is set correctly. The patch only demonstrates this bug with the NoNode and NodeExists exceptions, but it appears to affect any exception that needs path information: when an error occurs in a transaction, ZooKeeper creates an ErrorResult instance to represent the error result. However, the ErrorResult class doesn't have a field for the path where the error occurred (see src/java/main/org/apache/zookeeper/OpResult.java for more details). |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 25 weeks, 2 days ago | 0|i31qfr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2495 | Cluster unavailable on disk full(ENOSPC), disk quota(EDQUOT), disk write error(EIO) errors |
Bug | Open | Major | Unresolved | Unassigned | Ramnatthan Alagappan | Ramnatthan Alagappan | 31/Jul/16 16:51 | 31/Jul/16 21:22 | 3.4.8 | leaderElection, server | 0 | 4 | Normal ZooKeeper cluster with 3 Linux nodes. | The ZooKeeper cluster completely stalls, with *no* transactions making progress, when a storage-related error (such as *ENOSPC, EDQUOT, EIO*) is encountered by the current *leader*. Surprisingly, in some circumstances the same errors cause the node to crash completely, thereby allowing other nodes in the cluster to become the leader and make progress with transactions. Interestingly, the same errors, if encountered while initializing a new log file, cause the current leader to go into a weird state (but not crash) in which it thinks it is the leader (and so does not allow others to become the leader). *This causes the entire cluster to freeze.* Here is the stacktrace of the leader: ------------------------------------------------ 2016-07-11 15:42:27,502 [myid:3] - INFO [SyncThread:3:FileTxnLog@199] - Creating new log file: log.200000001 2016-07-11 15:42:27,505 [myid:3] - ERROR [SyncThread:3:ZooKeeperCriticalThread@49] - Severe unrecoverable error, from thread : SyncThread:3 java.io.IOException: Disk quota exceeded at java.io.FileOutputStream.writeBytes(Native Method) at java.io.FileOutputStream.write(FileOutputStream.java:345) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at org.apache.zookeeper.server.persistence.FileTxnLog.append(FileTxnLog.java:211) at org.apache.zookeeper.server.persistence.FileTxnSnapLog.append(FileTxnSnapLog.java:314) at org.apache.zookeeper.server.ZKDatabase.append(ZKDatabase.java:476) at org.apache.zookeeper.server.SyncRequestProcessor.run(SyncRequestProcessor.java:140) ------------------------------------------------ From the trace and the code, it looks like the problem happens only when there are errors in two cases: 1. Error while appending the *log header*. 2. Error while *padding zero bytes to the end of the log*. If similar errors happen when writing other blocks of data, the node simply crashes, allowing others to be elected as a new leader. These two blocks of the newly created log file are special because they take a different error-recovery code path -- the node does not crash completely; instead, certain threads are killed while the thread holding the quorum apparently stays up, preventing others from becoming the new leader. This causes the other nodes to think there is no problem with the leader, while the cluster becomes unavailable for any subsequent operations such as reads and writes. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 33 weeks, 3 days ago | 0|i31q2n: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2494 | reword documentation to say "a simple shell to execute file-like operations" instead of "a shell in which to execute simple file-system-like operations" |
Bug | Open | Major | Unresolved | Abraham Fine | Abraham Fine | Abraham Fine | 28/Jul/16 15:35 | 05/Feb/20 07:15 | 3.4.8, 3.5.2 | 3.7.0, 3.5.8 | documentation | 0 | 2 | ZOOKEEPER-2477 | [~rakeshr] made this suggestion in ZOOKEEPER-2477 and I thought it was worth implementing in its own JIRA | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 2 years, 50 weeks ago | 0|i31n5j: |
| ZooKeeper | ZOOKEEPER-2493 | ZOOKEEPER-3170 Flaky Tests: ReconfigTest fails on Solaris |
Sub-task | Open | Major | Unresolved | Unassigned | Michael Han | Michael Han | 28/Jul/16 13:34 | 05/Feb/20 07:17 | 3.5.0, 3.5.1, 3.5.2 | 3.7.0, 3.5.8 | tests | 0 | 1 | ZOOKEEPER-2135 | Solaris | Two tests related to ReconfigTest fail consistently on Solaris: org.apache.zookeeper.test.ReconfigTest.testPortChangeToBlockedPortLeader and org.apache.zookeeper.test.ReconfigTest.testPortChangeToBlockedPortFollower Examples: https://builds.apache.org/view/S-Z/view/ZooKeeper/job/ZooKeeper-trunk-solaris/1245/#showFailuresLink https://builds.apache.org/view/S-Z/view/ZooKeeper/job/ZooKeeper_branch35_solaris/185/#showFailuresLink {noformat} org.apache.zookeeper.test.ReconfigTest.testPortChangeToBlockedPortFollower Failing for the past 5 builds (Since Failed#1241 ) Took 1 min 37 sec. Error Message client could not connect to reestablished quorum: giving up after 30+ seconds. Stacktrace junit.framework.AssertionFailedError: client could not connect to reestablished quorum: giving up after 30+ seconds. at org.apache.zookeeper.test.ReconfigTest.testNormalOperation(ReconfigTest.java:173) at org.apache.zookeeper.test.ReconfigTest.testPortChangeToBlockedPort(ReconfigTest.java:732) at org.apache.zookeeper.test.ReconfigTest.testPortChangeToBlockedPortFollower(ReconfigTest.java:658) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:79) Standard Output 2016-07-28 09:24:37,121 [myid:] - INFO [main:JUnit4ZKTestRunner@47] - No test.method specified. using default methods. 2016-07-28 09:24:37,185 [myid:] - INFO [main:JUnit4ZKTestRunner@47] - No test.method specified. using default methods. 2016-07-28 09:24:37,200 [myid:] - INFO [main:ZKTestCase$1@55] - STARTING testQuorumSystemChange 2016-07-28 09:24:37,202 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@77] - RUNNING TEST METHOD testQuorumSystemChange 2016-07-28 09:24:37,577 [myid:] - INFO [main:PortAssignment@157] - Single test process using ports from 11221 - 32767. 
2016-07-28 09:24:37,577 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11222 from range 11221 - 32767. 2016-07-28 09:24:37,579 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11223 from range 11221 - 32767. 2016-07-28 09:24:37,579 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11224 from range 11221 - 32767. 2016-07-28 09:24:37,580 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11225 from range 11221 - 32767. 2016-07-28 09:24:37,581 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11226 from range 11221 - 32767. 2016-07-28 09:24:37,581 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11227 from range 11221 - 32767. 2016-07-28 09:24:37,581 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11228 from range 11221 - 32767. 2016-07-28 09:24:37,582 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11229 from range 11221 - 32767. 2016-07-28 09:24:37,582 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11230 from range 11221 - 32767. 2016-07-28 09:24:37,582 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11231 from range 11221 - 32767. 2016-07-28 09:24:37,582 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11232 from range 11221 - 32767. 2016-07-28 09:24:37,584 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11233 from range 11221 - 32767. 2016-07-28 09:24:37,585 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11234 from range 11221 - 32767. 2016-07-28 09:24:37,585 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11235 from range 11221 - 32767. 2016-07-28 09:24:37,585 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11236 from range 11221 - 32767. 2016-07-28 09:24:37,585 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11237 from range 11221 - 32767. 2016-07-28 09:24:37,586 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11238 from range 11221 - 32767. 
2016-07-28 09:24:37,586 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11239 from range 11221 - 32767. 2016-07-28 09:24:37,586 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11240 from range 11221 - 32767. 2016-07-28 09:24:37,587 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11241 from range 11221 - 32767. 2016-07-28 09:24:37,587 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11242 from range 11221 - 32767. 2016-07-28 09:24:37,587 [myid:] - INFO [main:QuorumUtil@116] - Creating QuorumPeer 1; public port 11222 2016-07-28 09:24:37,609 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. 2016-07-28 09:24:37,615 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port /127.0.0.1:11222 2016-07-28 09:24:37,638 [myid:] - INFO [main:QuorumUtil@116] - Creating QuorumPeer 2; public port 11225 2016-07-28 09:24:37,638 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. 2016-07-28 09:24:37,639 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port /127.0.0.1:11225 2016-07-28 09:24:37,639 [myid:] - INFO [main:QuorumUtil@116] - Creating QuorumPeer 3; public port 11228 2016-07-28 09:24:37,640 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. 
2016-07-28 09:24:37,640 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port /127.0.0.1:11228 2016-07-28 09:24:37,641 [myid:] - INFO [main:QuorumUtil@116] - Creating QuorumPeer 4; public port 11231 2016-07-28 09:24:37,641 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. 2016-07-28 09:24:37,641 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port /127.0.0.1:11231 2016-07-28 09:24:37,642 [myid:] - INFO [main:QuorumUtil@116] - Creating QuorumPeer 5; public port 11234 2016-07-28 09:24:37,642 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. 2016-07-28 09:24:37,642 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port /127.0.0.1:11234 2016-07-28 09:24:37,643 [myid:] - INFO [main:QuorumUtil@116] - Creating QuorumPeer 6; public port 11237 2016-07-28 09:24:37,643 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. 2016-07-28 09:24:37,644 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port /127.0.0.1:11237 2016-07-28 09:24:37,644 [myid:] - INFO [main:QuorumUtil@116] - Creating QuorumPeer 7; public port 11240 2016-07-28 09:24:37,644 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. 
2016-07-28 09:24:37,645 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port /127.0.0.1:11240 2016-07-28 09:24:37,646 [myid:] - INFO [main:QuorumUtil@250] - Shutting down quorum peer QuorumPeer 2016-07-28 09:24:37,646 [myid:] - INFO [main:QuorumUtil@257] - No election available to shutdown QuorumPeer 2016-07-28 09:24:37,646 [myid:] - INFO [main:QuorumUtil@259] - Waiting for QuorumPeer to exit thread 2016-07-28 09:24:37,646 [myid:] - INFO [main:QuorumUtil@250] - Shutting down quorum peer QuorumPeer 2016-07-28 09:24:37,647 [myid:] - INFO [main:QuorumUtil@257] - No election available to shutdown QuorumPeer 2016-07-28 09:24:37,647 [myid:] - INFO [main:QuorumUtil@259] - Waiting for QuorumPeer to exit thread 2016-07-28 09:24:37,647 [myid:] - INFO [main:QuorumUtil@250] - Shutting down quorum peer QuorumPeer 2016-07-28 09:24:37,647 [myid:] - INFO [main:QuorumUtil@257] - No election available to shutdown QuorumPeer 2016-07-28 09:24:37,647 [myid:] - INFO [main:QuorumUtil@259] - Waiting for QuorumPeer to exit thread 2016-07-28 09:24:37,647 [myid:] - INFO [main:QuorumUtil@250] - Shutting down quorum peer QuorumPeer 2016-07-28 09:24:37,647 [myid:] - INFO [main:QuorumUtil@257] - No election available to shutdown QuorumPeer 2016-07-28 09:24:37,647 [myid:] - INFO [main:QuorumUtil@259] - Waiting for QuorumPeer to exit thread 2016-07-28 09:24:37,648 [myid:] - INFO [main:QuorumUtil@250] - Shutting down quorum peer QuorumPeer 2016-07-28 09:24:37,648 [myid:] - INFO [main:QuorumUtil@257] - No election available to shutdown QuorumPeer 2016-07-28 09:24:37,648 [myid:] - INFO [main:QuorumUtil@259] - Waiting for QuorumPeer to exit thread 2016-07-28 09:24:37,648 [myid:] - INFO [main:QuorumUtil@250] - Shutting down quorum peer QuorumPeer 2016-07-28 09:24:37,648 [myid:] - INFO [main:QuorumUtil@257] - No election available to shutdown QuorumPeer 2016-07-28 09:24:37,648 [myid:] - INFO [main:QuorumUtil@259] - Waiting for QuorumPeer to exit thread 2016-07-28 09:24:37,648 [myid:] - INFO 
[main:QuorumUtil@250] - Shutting down quorum peer QuorumPeer 2016-07-28 09:24:37,649 [myid:] - INFO [main:QuorumUtil@257] - No election available to shutdown QuorumPeer 2016-07-28 09:24:37,649 [myid:] - INFO [main:QuorumUtil@259] - Waiting for QuorumPeer to exit thread 2016-07-28 09:24:37,651 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 11222 2016-07-28 09:24:37,651 [myid:] - INFO [main:QuorumUtil@243] - 127.0.0.1:11222 is no longer accepting client connections 2016-07-28 09:24:37,651 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 11225 2016-07-28 09:24:37,652 [myid:] - INFO [main:QuorumUtil@243] - 127.0.0.1:11225 is no longer accepting client connections 2016-07-28 09:24:37,652 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 11228 2016-07-28 09:24:37,652 [myid:] - INFO [main:QuorumUtil@243] - 127.0.0.1:11228 is no longer accepting client connections 2016-07-28 09:24:37,652 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 11231 2016-07-28 09:24:37,653 [myid:] - INFO [main:QuorumUtil@243] - 127.0.0.1:11231 is no longer accepting client connections 2016-07-28 09:24:37,653 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 11234 2016-07-28 09:24:37,653 [myid:] - INFO [main:QuorumUtil@243] - 127.0.0.1:11234 is no longer accepting client connections 2016-07-28 09:24:37,653 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 11237 2016-07-28 09:24:37,653 [myid:] - INFO [main:QuorumUtil@243] - 127.0.0.1:11237 is no longer accepting client connections 2016-07-28 09:24:37,654 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 11240 2016-07-28 09:24:37,654 [myid:] - INFO [main:QuorumUtil@243] - 127.0.0.1:11240 is no longer accepting client connections 2016-07-28 09:24:37,654 [myid:] - INFO [main:QuorumUtil@203] - Creating QuorumPeer 1; public port 11222 2016-07-28 09:24:37,654 [myid:] - INFO [main:NIOServerCnxnFactory@673] - 
Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. 2016-07-28 09:24:37,655 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port /127.0.0.1:11222 2016-07-28 09:24:37,660 [myid:] - INFO [main:QuorumPeer@776] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-28 09:24:37,708 [myid:] - INFO [main:QuorumPeer@791] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-28 09:24:37,754 [myid:] - INFO [main:QuorumUtil@146] - Started QuorumPeer 1 2016-07-28 09:24:37,754 [myid:] - INFO [main:QuorumUtil@203] - Creating QuorumPeer 2; public port 11225 2016-07-28 09:24:37,755 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. 2016-07-28 09:24:37,755 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port /127.0.0.1:11225 2016-07-28 09:24:37,756 [myid:] - INFO [main:QuorumPeer@776] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-28 09:24:37,759 [myid:] - INFO [QuorumPeerListener:QuorumCnxManager$Listener@632] - My election bind port: /127.0.0.1:11224 2016-07-28 09:24:37,771 [myid:] - INFO [main:QuorumPeer@791] - acceptedEpoch not found! Creating with a reasonable default of 0. 
This should only happen when you are upgrading your installation 2016-07-28 09:24:37,780 [myid:] - INFO [main:QuorumUtil@146] - Started QuorumPeer 2 2016-07-28 09:24:37,780 [myid:] - INFO [main:QuorumUtil@203] - Creating QuorumPeer 3; public port 11228 2016-07-28 09:24:37,781 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. 2016-07-28 09:24:37,781 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port /127.0.0.1:11228 2016-07-28 09:24:37,781 [myid:] - INFO [QuorumPeerListener:QuorumCnxManager$Listener@632] - My election bind port: /127.0.0.1:11227 2016-07-28 09:24:37,888 [myid:] - INFO [main:QuorumPeer@776] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-28 09:24:37,894 [myid:] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:11222)(secure=disabled):QuorumPeer@1033] - LOOKING 2016-07-28 09:24:37,894 [myid:] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:11225)(secure=disabled):QuorumPeer@1033] - LOOKING 2016-07-28 09:24:37,895 [myid:] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:11225)(secure=disabled):FastLeaderElection@894] - New election. My id = 2, proposed zxid=0x0 2016-07-28 09:24:37,895 [myid:] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:11222)(secure=disabled):FastLeaderElection@894] - New election. 
My id = 1, proposed zxid=0x0 2016-07-28 09:24:37,899 [myid:] - INFO [/127.0.0.1:11227:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:50183 2016-07-28 09:24:37,899 [myid:] - INFO [WorkerSender[myid=1]:QuorumCnxManager@276] - Have smaller server identifier, so dropping the connection: (2, 1) 2016-07-28 09:24:37,901 [myid:] - INFO [/127.0.0.1:11224:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:50185 2016-07-28 09:24:37,901 [myid:] - WARN [WorkerSender[myid=1]:QuorumCnxManager@455] - Cannot open channel to 3 at election address /127.0.0.1:11230 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465) at java.lang.Thread.run(Thread.java:745) 2016-07-28 09:24:37,905 [myid:] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@688] - Notification: 2 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-07-28 09:24:37,906 [myid:] - WARN [WorkerSender[myid=2]:QuorumCnxManager@455] - Cannot open channel to 3 at election address 
/127.0.0.1:11230 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465) at java.lang.Thread.run(Thread.java:745) 2016-07-28 09:24:37,906 [myid:] - WARN [WorkerSender[myid=2]:QuorumCnxManager@455] - Cannot open channel to 4 at election address /127.0.0.1:11233 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419) at 
org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465) at java.lang.Thread.run(Thread.java:745) 2016-07-28 09:24:37,905 [myid:] - WARN [WorkerSender[myid=1]:QuorumCnxManager@455] - Cannot open channel to 4 at election address /127.0.0.1:11233 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465) at java.lang.Thread.run(Thread.java:745) 2016-07-28 09:24:37,907 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@688] - Notification: 2 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-07-28 09:24:37,908 [myid:] - WARN [WorkerSender[myid=1]:QuorumCnxManager@455] - Cannot open channel to 5 at election address /127.0.0.1:11236 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465) at java.lang.Thread.run(Thread.java:745) 2016-07-28 09:24:37,920 [myid:] - INFO [main:QuorumPeer@791] - acceptedEpoch not found! Creating with a reasonable default of 0. 
This should only happen when you are upgrading your installation 2016-07-28 09:24:37,961 [myid:] - WARN [WorkerSender[myid=2]:QuorumCnxManager@455] - Cannot open channel to 5 at election address /127.0.0.1:11236 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465) at java.lang.Thread.run(Thread.java:745) 2016-07-28 09:24:37,908 [myid:] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-07-28 09:24:37,962 [myid:] - WARN [WorkerSender[myid=1]:QuorumCnxManager@455] - Cannot open channel to 6 at election address /127.0.0.1:11239 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at 
java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465) at java.lang.Thread.run(Thread.java:745) 2016-07-28 09:24:37,962 [myid:] - WARN [WorkerSender[myid=2]:QuorumCnxManager@455] - Cannot open channel to 6 at election address /127.0.0.1:11239 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465) at java.lang.Thread.run(Thread.java:745) 2016-07-28 09:24:37,963 [myid:] - WARN [WorkerSender[myid=1]:QuorumCnxManager@455] - Cannot open channel to 7 at election address /127.0.0.1:11242 java.net.ConnectException: Connection refused at 
java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465) at java.lang.Thread.run(Thread.java:745) 2016-07-28 09:24:37,963 [myid:] - WARN [WorkerSender[myid=2]:QuorumCnxManager@455] - Cannot open channel to 7 at election address /127.0.0.1:11242 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486) at 
org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465) at java.lang.Thread.run(Thread.java:745) 2016-07-28 09:24:37,965 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-07-28 09:24:37,967 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-07-28 09:24:37,968 [myid:] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-07-28 09:24:37,968 [myid:] - WARN [WorkerSender[myid=1]:QuorumCnxManager@455] - Cannot open channel to 3 at election address /127.0.0.1:11230 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486) at 
org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465) at java.lang.Thread.run(Thread.java:745) 2016-07-28 09:24:37,969 [myid:] - WARN [WorkerSender[myid=1]:QuorumCnxManager@455] - Cannot open channel to 4 at election address /127.0.0.1:11233 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465) at java.lang.Thread.run(Thread.java:745) 2016-07-28 09:24:37,969 [myid:] - WARN [WorkerSender[myid=1]:QuorumCnxManager@455] - Cannot open channel to 5 at election address /127.0.0.1:11236 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441) at 
org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465) at java.lang.Thread.run(Thread.java:745) 2016-07-28 09:24:37,981 [myid:] - WARN [WorkerSender[myid=1]:QuorumCnxManager@455] - Cannot open channel to 6 at election address /127.0.0.1:11239 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465) at java.lang.Thread.run(Thread.java:745) 2016-07-28 09:24:37,982 [myid:] - WARN [WorkerSender[myid=1]:QuorumCnxManager@455] - Cannot open channel to 7 at election address /127.0.0.1:11242 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at 
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465)
    at java.lang.Thread.run(Thread.java:745)
2016-07-28 09:24:38,015 [myid:] - INFO [main:QuorumUtil@146] - Started QuorumPeer 3
2016-07-28 09:24:38,016 [myid:] - INFO [main:QuorumUtil@203] - Creating QuorumPeer 4; public port 11231
2016-07-28 09:24:38,016 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers.
2016-07-28 09:24:38,017 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port /127.0.0.1:11231
2016-07-28 09:24:38,017 [myid:] - INFO [QuorumPeer[myid=3](plain=/127.0.0.1:11228)(secure=disabled):QuorumPeer@1033] - LOOKING
2016-07-28 09:24:38,018 [myid:] - INFO [QuorumPeer[myid=3](plain=/127.0.0.1:11228)(secure=disabled):FastLeaderElection@894] - New election. My id = 3, proposed zxid=0x0
2016-07-28 09:24:38,018 [myid:] - INFO [main:QuorumPeer@776] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2016-07-28 09:24:38,020 [myid:] - INFO [QuorumPeerListener:QuorumCnxManager$Listener@632] - My election bind port: /127.0.0.1:11230
2016-07-28 09:24:38,025 [myid:] - INFO [/127.0.0.1:11224:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:50200
2016-07-28 09:24:38,026 [myid:] - INFO [/127.0.0.1:11227:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:50201
2016-07-28 09:24:38,027 [myid:] - WARN [WorkerSender[myid=3]:QuorumCnxManager@455] - Cannot open channel to 4 at election address /127.0.0.1:11233
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465)
    at java.lang.Thread.run(Thread.java:745)
2016-07-28 09:24:38,027 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@688] - Notification: 2 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-07-28 09:24:38,028 [myid:] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@688] - Notification: 2 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-07-28 09:24:38,029 [myid:] - INFO [WorkerReceiver[myid=3]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-07-28 09:24:38,029 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@688] - Notification: 2 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-07-28 09:24:38,031 [myid:] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@688] - Notification: 2 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-07-28 09:24:38,031 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@688] - Notification: 2 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-07-28 09:24:38,031 [myid:] - INFO [WorkerReceiver[myid=3]:FastLeaderElection@688] - Notification: 2 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-07-28 09:24:38,032 [myid:] - WARN [WorkerSender[myid=1]:QuorumCnxManager@455] - Cannot open channel to 4 at election address /127.0.0.1:11233
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465)
    at java.lang.Thread.run(Thread.java:745)
2016-07-28 09:24:38,032 [myid:] - WARN [WorkerSender[myid=1]:QuorumCnxManager@455] - Cannot open channel to 5 at election address /127.0.0.1:11236
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465)
    at java.lang.Thread.run(Thread.java:745)
2016-07-28 09:24:38,033 [myid:] - WARN [WorkerSender[myid=1]:QuorumCnxManager@455] - Cannot open channel to 6 at election address /127.0.0.1:11239
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465)
    at java.lang.Thread.run(Thread.java:745)
2016-07-28 09:24:38,033 [myid:] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@688] - Notification: 2 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-07-28 09:24:38,032 [myid:] - INFO [WorkerReceiver[myid=3]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-07-28 09:24:38,033 [myid:] - WARN [WorkerSender[myid=2]:QuorumCnxManager@455] - Cannot open channel to 4 at election address /127.0.0.1:11233
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465)
    at java.lang.Thread.run(Thread.java:745)
2016-07-28 09:24:38,034 [myid:] - WARN [WorkerSender[myid=1]:QuorumCnxManager@455] - Cannot open channel to 7 at election address /127.0.0.1:11242
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465)
    at java.lang.Thread.run(Thread.java:745)
2016-07-28 09:24:38,035 [myid:] - INFO [WorkerReceiver[myid=3]:FastLeaderElection@688] - Notification: 2 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-07-28 09:24:38,036 [myid:] - WARN [WorkerSender[myid=2]:QuorumCnxManager@455] - Cannot open channel to 5 at election address /127.0.0.1:11236
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465)
    at java.lang.Thread.run(Thread.java:745)
2016-07-28 09:24:38,037 [myid:] - INFO [WorkerReceiver[myid=3]:FastLeaderElection@688] - Notification: 2 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-07-28 09:24:38,037 [myid:] - WARN [WorkerSender[myid=2]:QuorumCnxManager@455] - Cannot open channel to 6 at election address /127.0.0.1:11239
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465)
    at java.lang.Thread.run(Thread.java:745)
2016-07-28 09:24:38,038 [myid:] - WARN [WorkerSender[myid=2]:QuorumCnxManager@455] - Cannot open channel to 7 at election address /127.0.0.1:11242
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465)
    at java.lang.Thread.run(Thread.java:745)
2016-07-28 09:24:38,044 [myid:] - WARN [WorkerSender[myid=3]:QuorumCnxManager@455] - Cannot open channel to 5 at election address /127.0.0.1:11236
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465)
    at java.lang.Thread.run(Thread.java:745)
2016-07-28 09:24:38,045 [myid:] - WARN [WorkerSender[myid=3]:QuorumCnxManager@455] - Cannot open channel to 6 at election address /127.0.0.1:11239
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465)
    at java.lang.Thread.run(Thread.java:745)
2016-07-28 09:24:38,045 [myid:] - WARN [WorkerSender[myid=3]:QuorumCnxManager@455] - Cannot open channel to 7 at election address /127.0.0.1:11242
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
...[truncated 3451538 chars]... mPeer[myid=1](plain=/127.0.0.1:11371)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id1]
2016-07-28 09:30:21,332 [myid:] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:11371)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id1,name1=replica.1]
2016-07-28 09:30:21,332 [myid:] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:11371)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id1,name1=replica.2]
2016-07-28 09:30:21,332 [myid:] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:11371)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id1,name1=replica.3]
2016-07-28 09:30:21,332 [myid:] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:11371)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id1,name1=replica.5]
2016-07-28 09:30:21,332 [myid:] - INFO [main:QuorumUtil@250] - Shutting down quorum peer QuorumPeer[myid=2](plain=/127.0.0.1:11374)(secure=disabled)
2016-07-28 09:30:21,332 [myid:] - INFO [main:Follower@198] - shutdown called
java.lang.Exception: shutdown Follower
    at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:198)
    at org.apache.zookeeper.server.quorum.QuorumPeer.shutdown(QuorumPeer.java:1184)
    at org.apache.zookeeper.test.QuorumUtil.shutdown(QuorumUtil.java:251)
    at org.apache.zookeeper.test.QuorumUtil.shutdownAll(QuorumUtil.java:238)
    at org.apache.zookeeper.test.QuorumUtil.tearDown(QuorumUtil.java:306)
    at org.apache.zookeeper.test.ReconfigTest.tearDown(ReconfigTest.java:64)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
    at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:53)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
    at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:38)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906)
2016-07-28 09:30:21,333 [myid:] - INFO [main:LearnerZooKeeperServer@165] - Shutting down
2016-07-28 09:30:21,333 [myid:] - INFO [main:ZooKeeperServer@498] - shutting down
2016-07-28 09:30:21,332 [myid:] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:11374)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id2,name1=replica.2,name2=Follower]
2016-07-28 09:30:21,333 [myid:] - INFO [main:FollowerRequestProcessor@138] - Shutting down
2016-07-28 09:30:21,333 [myid:] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:11374)(secure=disabled):Follower@198] - shutdown called
java.lang.Exception: shutdown Follower
    at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:198)
    at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:1115)
2016-07-28 09:30:21,333 [myid:] - INFO [FollowerRequestProcessor:2:FollowerRequestProcessor@109] - FollowerRequestProcessor exited loop!
2016-07-28 09:30:21,333 [myid:] - INFO [main:CommitProcessor@414] - Shutting down
2016-07-28 09:30:21,334 [myid:] - INFO [CommitProcessor:2:CommitProcessor@299] - CommitProcessor exited loop!
2016-07-28 09:30:21,334 [myid:] - INFO [main:FinalRequestProcessor@479] - shutdown of request processor complete
2016-07-28 09:30:21,334 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id2,name1=replica.2,name2=Follower,name3=InMemoryDataTree]
2016-07-28 09:30:21,334 [myid:] - INFO [main:SyncRequestProcessor@191] - Shutting down
2016-07-28 09:30:21,334 [myid:] - INFO [SyncThread:2:SyncRequestProcessor@169] - SyncRequestProcessor exited!
2016-07-28 09:30:21,334 [myid:] - WARN [QuorumPeer[myid=2](plain=/127.0.0.1:11374)(secure=disabled):QuorumPeer@1158] - PeerState set to LOOKING
2016-07-28 09:30:21,335 [myid:] - WARN [QuorumPeer[myid=2](plain=/127.0.0.1:11374)(secure=disabled):QuorumPeer@1140] - QuorumPeer main thread exited
2016-07-28 09:30:21,335 [myid:] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:11374)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id2]
2016-07-28 09:30:21,335 [myid:] - INFO [ConnnectionExpirer:NIOServerCnxnFactory$ConnectionExpirerThread@583] - ConnnectionExpirerThread interrupted
2016-07-28 09:30:21,335 [myid:] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:11374)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id2,name1=replica.2]
2016-07-28 09:30:21,336 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:/127.0.0.1:11374:NIOServerCnxnFactory$AcceptThread@219] - accept thread exitted run method
2016-07-28 09:30:21,337 [myid:] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:11374)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id2,name1=replica.1]
2016-07-28 09:30:21,337 [myid:] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:11374)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id2,name1=replica.3]
2016-07-28 09:30:21,337 [myid:] - INFO [NIOServerCxnFactory.SelectorThread-0:NIOServerCnxnFactory$SelectorThread@420] - selector thread exitted run method
2016-07-28 09:30:21,337 [myid:] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:11374)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id2,name1=replica.5]
2016-07-28 09:30:21,337 [myid:] - INFO [NIOServerCxnFactory.SelectorThread-1:NIOServerCnxnFactory$SelectorThread@420] - selector thread exitted run method
2016-07-28 09:30:21,337 [myid:] - INFO [/127.0.0.1:11376:QuorumCnxManager$Listener@661] - Leaving listener
2016-07-28 09:30:21,338 [myid:] - INFO [main:QuorumUtil@254] - Shutting down leader election QuorumPeer[myid=2](plain=/127.0.0.1:11374)(secure=disabled)
2016-07-28 09:30:21,339 [myid:] - INFO [main:QuorumUtil@259] - Waiting for QuorumPeer[myid=2](plain=/127.0.0.1:11374)(secure=disabled) to exit thread
2016-07-28 09:30:21,339 [myid:] - INFO [main:QuorumUtil@250] - Shutting down quorum peer QuorumPeer[myid=3](plain=/127.0.0.1:11377)(secure=disabled)
2016-07-28 09:30:21,339 [myid:] - INFO [main:Follower@198] - shutdown called
java.lang.Exception: shutdown Follower
    at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:198)
    at org.apache.zookeeper.server.quorum.QuorumPeer.shutdown(QuorumPeer.java:1184)
    at org.apache.zookeeper.test.QuorumUtil.shutdown(QuorumUtil.java:251)
    at org.apache.zookeeper.test.QuorumUtil.shutdownAll(QuorumUtil.java:238)
    at org.apache.zookeeper.test.QuorumUtil.tearDown(QuorumUtil.java:306)
    at org.apache.zookeeper.test.ReconfigTest.tearDown(ReconfigTest.java:64)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
    at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:53)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
    at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:38)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906)
2016-07-28 09:30:21,339 [myid:] - INFO [main:LearnerZooKeeperServer@165] - Shutting down
2016-07-28 09:30:21,339 [myid:] - INFO [main:ZooKeeperServer@498] - shutting down
2016-07-28 09:30:21,339 [myid:] - INFO [main:FollowerRequestProcessor@138] - Shutting down
2016-07-28 09:30:21,339 [myid:] - INFO [main:CommitProcessor@414] - Shutting down
2016-07-28 09:30:21,339 [myid:] - INFO [FollowerRequestProcessor:3:FollowerRequestProcessor@109] - FollowerRequestProcessor exited loop!
2016-07-28 09:30:21,339 [myid:] - INFO [CommitProcessor:3:CommitProcessor@299] - CommitProcessor exited loop!
2016-07-28 09:30:21,339 [myid:] - INFO [main:FinalRequestProcessor@479] - shutdown of request processor complete
2016-07-28 09:30:21,340 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id3,name1=replica.3,name2=Follower,name3=InMemoryDataTree]
2016-07-28 09:30:21,340 [myid:] - INFO [main:SyncRequestProcessor@191] - Shutting down
2016-07-28 09:30:21,340 [myid:] - INFO [SyncThread:3:SyncRequestProcessor@169] - SyncRequestProcessor exited!
2016-07-28 09:30:21,340 [myid:] - INFO [ConnnectionExpirer:NIOServerCnxnFactory$ConnectionExpirerThread@583] - ConnnectionExpirerThread interrupted
2016-07-28 09:30:21,341 [myid:] - INFO [NIOServerCxnFactory.SelectorThread-0:NIOServerCnxnFactory$SelectorThread@420] - selector thread exitted run method
2016-07-28 09:30:21,341 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:/127.0.0.1:11377:NIOServerCnxnFactory$AcceptThread@219] - accept thread exitted run method
2016-07-28 09:30:21,342 [myid:] - INFO [NIOServerCxnFactory.SelectorThread-1:NIOServerCnxnFactory$SelectorThread@420] - selector thread exitted run method
2016-07-28 09:30:21,342 [myid:] - INFO [/127.0.0.1:11379:QuorumCnxManager$Listener@661] - Leaving listener
2016-07-28 09:30:21,343 [myid:] - INFO [main:QuorumUtil@254] - Shutting down leader election QuorumPeer[myid=3](plain=/127.0.0.1:11377)(secure=disabled)
2016-07-28 09:30:21,343 [myid:] - INFO [main:QuorumUtil@259] - Waiting for QuorumPeer[myid=3](plain=/127.0.0.1:11377)(secure=disabled) to exit thread
2016-07-28 09:30:21,421 [myid:127.0.0.1:11273] - INFO [main-SendThread(127.0.0.1:11273):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:11273. Will not attempt to authenticate using SASL (unknown error)
2016-07-28 09:30:21,422 [myid:127.0.0.1:11273] - ERROR [main-SendThread(127.0.0.1:11273):ClientCnxnSocketNIO@287] - Unable to open socket to 127.0.0.1/127.0.0.1:11273
2016-07-28 09:30:21,422 [myid:127.0.0.1:11273] - WARN [main-SendThread(127.0.0.1:11273):ClientCnxn$SendThread@1235] - Session 0x222add090c60000 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
    at sun.nio.ch.Net.connect0(Native Method)
    at sun.nio.ch.Net.connect(Net.java:465)
    at sun.nio.ch.Net.connect(Net.java:457)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:670)
    at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:275)
    at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:285)
    at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1098)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1133)
2016-07-28 09:30:21,451 [myid:127.0.0.1:11257] - INFO [main-SendThread(127.0.0.1:11257):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:11257. Will not attempt to authenticate using SASL (unknown error)
2016-07-28 09:30:21,451 [myid:127.0.0.1:11257] - ERROR [main-SendThread(127.0.0.1:11257):ClientCnxnSocketNIO@287] - Unable to open socket to 127.0.0.1/127.0.0.1:11257
2016-07-28 09:30:21,451 [myid:127.0.0.1:11257] - WARN [main-SendThread(127.0.0.1:11257):ClientCnxn$SendThread@1235] - Session 0x122adcf0ae70000 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
    at sun.nio.ch.Net.connect0(Native Method)
    at sun.nio.ch.Net.connect(Net.java:465)
    at sun.nio.ch.Net.connect(Net.java:457)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:670)
    at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:275)
    at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:285)
    at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1098)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1133)
2016-07-28 09:30:21,671 [myid:127.0.0.1:11276] - INFO [main-SendThread(127.0.0.1:11276):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:11276. Will not attempt to authenticate using SASL (unknown error)
2016-07-28 09:30:21,671 [myid:127.0.0.1:11276] - ERROR [main-SendThread(127.0.0.1:11276):ClientCnxnSocketNIO@287] - Unable to open socket to 127.0.0.1/127.0.0.1:11276
2016-07-28 09:30:21,671 [myid:127.0.0.1:11276] - WARN [main-SendThread(127.0.0.1:11276):ClientCnxn$SendThread@1235] - Session 0x322add08d0d0000 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
    at sun.nio.ch.Net.connect0(Native Method)
    at sun.nio.ch.Net.connect(Net.java:465)
    at sun.nio.ch.Net.connect(Net.java:457)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:670)
    at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:275)
    at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:285)
    at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1098)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1133)
2016-07-28 09:30:21,991 [myid:127.0.0.1:11306] - INFO [main-SendThread(127.0.0.1:11306):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:11306. Will not attempt to authenticate using SASL (unknown error)
2016-07-28 09:30:21,991 [myid:127.0.0.1:11306] - ERROR [main-SendThread(127.0.0.1:11306):ClientCnxnSocketNIO@287] - Unable to open socket to 127.0.0.1/127.0.0.1:11306
2016-07-28 09:30:21,991 [myid:127.0.0.1:11306] - WARN [main-SendThread(127.0.0.1:11306):ClientCnxn$SendThread@1235] - Session 0x222add109d00000 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
    at sun.nio.ch.Net.connect0(Native Method)
    at sun.nio.ch.Net.connect(Net.java:465)
    at sun.nio.ch.Net.connect(Net.java:457)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:670)
    at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:275)
    at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:285)
    at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1098)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1133)
2016-07-28 09:30:22,091 [myid:127.0.0.1:11263] - INFO [main-SendThread(127.0.0.1:11263):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:11263. Will not attempt to authenticate using SASL (unknown error)
2016-07-28 09:30:22,091 [myid:127.0.0.1:11263] - ERROR [main-SendThread(127.0.0.1:11263):ClientCnxnSocketNIO@287] - Unable to open socket to 127.0.0.1/127.0.0.1:11263
2016-07-28 09:30:22,091 [myid:127.0.0.1:11263] - WARN [main-SendThread(127.0.0.1:11263):ClientCnxn$SendThread@1235] - Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
    at sun.nio.ch.Net.connect0(Native Method)
    at sun.nio.ch.Net.connect(Net.java:465)
    at sun.nio.ch.Net.connect(Net.java:457)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:670)
    at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:275)
    at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:285)
    at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1098)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1133)
2016-07-28 09:30:22,111 [myid:127.0.0.1:11303] - INFO [main-SendThread(127.0.0.1:11303):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:11303. Will not attempt to authenticate using SASL (unknown error)
2016-07-28 09:30:22,111 [myid:127.0.0.1:11303] - ERROR [main-SendThread(127.0.0.1:11303):ClientCnxnSocketNIO@287] - Unable to open socket to 127.0.0.1/127.0.0.1:11303
2016-07-28 09:30:22,111 [myid:127.0.0.1:11303] - WARN [main-SendThread(127.0.0.1:11303):ClientCnxn$SendThread@1235] - Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
    at sun.nio.ch.Net.connect0(Native Method)
    at sun.nio.ch.Net.connect(Net.java:465)
    at sun.nio.ch.Net.connect(Net.java:457)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:670)
    at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:275)
    at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:285)
    at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1098)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1133)
2016-07-28 09:30:22,241 [myid:127.0.0.1:11270] - INFO [main-SendThread(127.0.0.1:11270):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:11270. Will not attempt to authenticate using SASL (unknown error)
2016-07-28 09:30:22,241 [myid:127.0.0.1:11270] - ERROR [main-SendThread(127.0.0.1:11270):ClientCnxnSocketNIO@287] - Unable to open socket to 127.0.0.1/127.0.0.1:11270
2016-07-28 09:30:22,241 [myid:127.0.0.1:11270] - WARN [main-SendThread(127.0.0.1:11270):ClientCnxn$SendThread@1235] - Session 0x122add08ccf0000 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
    at sun.nio.ch.Net.connect0(Native Method)
    at sun.nio.ch.Net.connect(Net.java:465)
    at sun.nio.ch.Net.connect(Net.java:457)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:670)
    at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:275)
    at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:285)
    at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1098)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1133)
2016-07-28 09:30:22,341 [myid:] - INFO [QuorumPeer[myid=3](plain=/127.0.0.1:11377)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id3,name1=replica.3,name2=Follower]
2016-07-28 09:30:22,341 [myid:] - INFO [QuorumPeer[myid=3](plain=/127.0.0.1:11377)(secure=disabled):Follower@198] - shutdown called
java.lang.Exception: shutdown Follower
    at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:198)
    at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:1115)
2016-07-28 09:30:22,341 [myid:] - WARN [QuorumPeer[myid=3](plain=/127.0.0.1:11377)(secure=disabled):QuorumPeer@1158] - PeerState set to LOOKING
2016-07-28 09:30:22,341 [myid:] - WARN [QuorumPeer[myid=3](plain=/127.0.0.1:11377)(secure=disabled):QuorumPeer@1140] - QuorumPeer main thread exited
2016-07-28 09:30:22,341 [myid:] - INFO [QuorumPeer[myid=3](plain=/127.0.0.1:11377)(secure=disabled):MBeanRegistry@128] - Unregister MBean
[org.apache.ZooKeeperService:name0=ReplicatedServer_id3] 2016-07-28 09:30:22,342 [myid:] - INFO [QuorumPeer[myid=3](plain=/127.0.0.1:11377)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id3,name1=replica.3] 2016-07-28 09:30:22,342 [myid:] - INFO [QuorumPeer[myid=3](plain=/127.0.0.1:11377)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id3,name1=replica.1] 2016-07-28 09:30:22,342 [myid:] - INFO [QuorumPeer[myid=3](plain=/127.0.0.1:11377)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id3,name1=replica.2] 2016-07-28 09:30:22,342 [myid:] - INFO [QuorumPeer[myid=3](plain=/127.0.0.1:11377)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id3,name1=replica.5] 2016-07-28 09:30:22,342 [myid:] - INFO [main:QuorumUtil@250] - Shutting down quorum peer QuorumPeer[myid=4](plain=/127.0.0.1:11380)(secure=disabled) 2016-07-28 09:30:22,342 [myid:] - INFO [main:Follower@198] - shutdown called java.lang.Exception: shutdown Follower at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:198) at org.apache.zookeeper.server.quorum.QuorumPeer.shutdown(QuorumPeer.java:1184) at org.apache.zookeeper.test.QuorumUtil.shutdown(QuorumUtil.java:251) at org.apache.zookeeper.test.QuorumUtil.shutdownAll(QuorumUtil.java:238) at org.apache.zookeeper.test.QuorumUtil.tearDown(QuorumUtil.java:306) at org.apache.zookeeper.test.ReconfigTest.tearDown(ReconfigTest.java:64) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:53) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:38) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906) 2016-07-28 09:30:22,342 [myid:] - INFO [main:LearnerZooKeeperServer@165] - Shutting down 2016-07-28 09:30:22,342 [myid:] - INFO [main:ZooKeeperServer@498] - shutting down 2016-07-28 09:30:22,343 [myid:] - INFO [main:FollowerRequestProcessor@138] - Shutting down 2016-07-28 09:30:22,343 [myid:] - INFO [main:CommitProcessor@414] - Shutting down 2016-07-28 09:30:22,343 [myid:] - INFO [FollowerRequestProcessor:4:FollowerRequestProcessor@109] - FollowerRequestProcessor exited loop! 2016-07-28 09:30:22,343 [myid:] - INFO [CommitProcessor:4:CommitProcessor@299] - CommitProcessor exited loop! 
2016-07-28 09:30:22,343 [myid:] - INFO [main:FinalRequestProcessor@479] - shutdown of request processor complete 2016-07-28 09:30:22,344 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id4,name1=replica.4,name2=Follower,name3=InMemoryDataTree] 2016-07-28 09:30:22,344 [myid:] - INFO [main:SyncRequestProcessor@191] - Shutting down 2016-07-28 09:30:22,344 [myid:] - INFO [SyncThread:4:SyncRequestProcessor@169] - SyncRequestProcessor exited! 2016-07-28 09:30:22,344 [myid:] - INFO [ConnnectionExpirer:NIOServerCnxnFactory$ConnectionExpirerThread@583] - ConnnectionExpirerThread interrupted 2016-07-28 09:30:22,345 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:/127.0.0.1:11380:NIOServerCnxnFactory$AcceptThread@219] - accept thread exitted run method 2016-07-28 09:30:22,345 [myid:] - INFO [NIOServerCxnFactory.SelectorThread-0:NIOServerCnxnFactory$SelectorThread@420] - selector thread exitted run method 2016-07-28 09:30:22,345 [myid:] - INFO [NIOServerCxnFactory.SelectorThread-1:NIOServerCnxnFactory$SelectorThread@420] - selector thread exitted run method 2016-07-28 09:30:22,346 [myid:] - INFO [/127.0.0.1:11382:QuorumCnxManager$Listener@661] - Leaving listener 2016-07-28 09:30:22,346 [myid:] - INFO [main:QuorumUtil@254] - Shutting down leader election QuorumPeer[myid=4](plain=/127.0.0.1:11380)(secure=disabled) 2016-07-28 09:30:22,346 [myid:] - INFO [main:QuorumUtil@259] - Waiting for QuorumPeer[myid=4](plain=/127.0.0.1:11380)(secure=disabled) to exit thread 2016-07-28 09:30:22,521 [myid:127.0.0.1:11309] - INFO [main-SendThread(127.0.0.1:11309):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:11309. 
Will not attempt to authenticate using SASL (unknown error) 2016-07-28 09:30:22,521 [myid:127.0.0.1:11309] - ERROR [main-SendThread(127.0.0.1:11309):ClientCnxnSocketNIO@287] - Unable to open socket to 127.0.0.1/127.0.0.1:11309 2016-07-28 09:30:22,521 [myid:127.0.0.1:11309] - WARN [main-SendThread(127.0.0.1:11309):ClientCnxn$SendThread@1235] - Session 0x322add109d50000 for server null, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.Net.connect0(Native Method) at sun.nio.ch.Net.connect(Net.java:465) at sun.nio.ch.Net.connect(Net.java:457) at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:670) at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:275) at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:285) at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1098) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1133) 2016-07-28 09:30:22,631 [myid:127.0.0.1:11273] - INFO [main-SendThread(127.0.0.1:11273):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:11273. 
Will not attempt to authenticate using SASL (unknown error) 2016-07-28 09:30:22,631 [myid:127.0.0.1:11273] - ERROR [main-SendThread(127.0.0.1:11273):ClientCnxnSocketNIO@287] - Unable to open socket to 127.0.0.1/127.0.0.1:11273 2016-07-28 09:30:22,631 [myid:127.0.0.1:11273] - WARN [main-SendThread(127.0.0.1:11273):ClientCnxn$SendThread@1235] - Session 0x222add090c60000 for server null, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.Net.connect0(Native Method) at sun.nio.ch.Net.connect(Net.java:465) at sun.nio.ch.Net.connect(Net.java:457) at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:670) at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:275) at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:285) at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1098) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1133) 2016-07-28 09:30:22,761 [myid:] - INFO [WorkerReceiver[myid=4]:FastLeaderElection$Messenger$WorkerReceiver@440] - WorkerReceiver is down 2016-07-28 09:30:22,761 [myid:] - INFO [WorkerSender[myid=4]:FastLeaderElection$Messenger$WorkerSender@470] - WorkerSender is down 2016-07-28 09:30:22,771 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection$Messenger$WorkerReceiver@440] - WorkerReceiver is down 2016-07-28 09:30:22,771 [myid:] - INFO [WorkerSender[myid=1]:FastLeaderElection$Messenger$WorkerSender@470] - WorkerSender is down 2016-07-28 09:30:22,771 [myid:] - INFO [WorkerSender[myid=2]:FastLeaderElection$Messenger$WorkerSender@470] - WorkerSender is down 2016-07-28 09:30:22,771 [myid:] - INFO [WorkerReceiver[myid=3]:FastLeaderElection$Messenger$WorkerReceiver@440] - WorkerReceiver is down 2016-07-28 09:30:22,771 [myid:] - INFO [WorkerReceiver[myid=2]:FastLeaderElection$Messenger$WorkerReceiver@440] - WorkerReceiver is down 2016-07-28 09:30:22,771 [myid:] - 
INFO [WorkerSender[myid=3]:FastLeaderElection$Messenger$WorkerSender@470] - WorkerSender is down 2016-07-28 09:30:23,061 [myid:127.0.0.1:11260] - INFO [main-SendThread(127.0.0.1:11260):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:11260. Will not attempt to authenticate using SASL (unknown error) 2016-07-28 09:30:23,061 [myid:127.0.0.1:11260] - ERROR [main-SendThread(127.0.0.1:11260):ClientCnxnSocketNIO@287] - Unable to open socket to 127.0.0.1/127.0.0.1:11260 2016-07-28 09:30:23,061 [myid:127.0.0.1:11260] - WARN [main-SendThread(127.0.0.1:11260):ClientCnxn$SendThread@1235] - Session 0x222adcf0da20000 for server null, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.Net.connect0(Native Method) at sun.nio.ch.Net.connect(Net.java:465) at sun.nio.ch.Net.connect(Net.java:457) at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:670) at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:275) at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:285) at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1098) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1133) 2016-07-28 09:30:23,211 [myid:127.0.0.1:11263] - INFO [main-SendThread(127.0.0.1:11263):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:11263. 
Will not attempt to authenticate using SASL (unknown error) 2016-07-28 09:30:23,211 [myid:127.0.0.1:11263] - ERROR [main-SendThread(127.0.0.1:11263):ClientCnxnSocketNIO@287] - Unable to open socket to 127.0.0.1/127.0.0.1:11263 2016-07-28 09:30:23,211 [myid:127.0.0.1:11263] - WARN [main-SendThread(127.0.0.1:11263):ClientCnxn$SendThread@1235] - Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.Net.connect0(Native Method) at sun.nio.ch.Net.connect(Net.java:465) at sun.nio.ch.Net.connect(Net.java:457) at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:670) at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:275) at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:285) at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1098) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1133) 2016-07-28 09:30:23,231 [myid:127.0.0.1:11303] - INFO [main-SendThread(127.0.0.1:11303):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:11303. Will not attempt to authenticate using SASL (unknown error) 2016-07-28 09:30:23,231 [myid:127.0.0.1:11276] - INFO [main-SendThread(127.0.0.1:11276):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:11276. 
Will not attempt to authenticate using SASL (unknown error) 2016-07-28 09:30:23,231 [myid:127.0.0.1:11303] - ERROR [main-SendThread(127.0.0.1:11303):ClientCnxnSocketNIO@287] - Unable to open socket to 127.0.0.1/127.0.0.1:11303 2016-07-28 09:30:23,231 [myid:127.0.0.1:11303] - WARN [main-SendThread(127.0.0.1:11303):ClientCnxn$SendThread@1235] - Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.Net.connect0(Native Method) at sun.nio.ch.Net.connect(Net.java:465) at sun.nio.ch.Net.connect(Net.java:457) at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:670) at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:275) at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:285) at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1098) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1133) 2016-07-28 09:30:23,231 [myid:127.0.0.1:11276] - ERROR [main-SendThread(127.0.0.1:11276):ClientCnxnSocketNIO@287] - Unable to open socket to 127.0.0.1/127.0.0.1:11276 2016-07-28 09:30:23,232 [myid:127.0.0.1:11276] - WARN [main-SendThread(127.0.0.1:11276):ClientCnxn$SendThread@1235] - Session 0x322add08d0d0000 for server null, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.Net.connect0(Native Method) at sun.nio.ch.Net.connect(Net.java:465) at sun.nio.ch.Net.connect(Net.java:457) at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:670) at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:275) at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:285) at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1098) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1133) 2016-07-28 09:30:23,341 
[myid:127.0.0.1:11257] - INFO [main-SendThread(127.0.0.1:11257):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:11257. Will not attempt to authenticate using SASL (unknown error) 2016-07-28 09:30:23,341 [myid:127.0.0.1:11257] - ERROR [main-SendThread(127.0.0.1:11257):ClientCnxnSocketNIO@287] - Unable to open socket to 127.0.0.1/127.0.0.1:11257 2016-07-28 09:30:23,341 [myid:127.0.0.1:11257] - WARN [main-SendThread(127.0.0.1:11257):ClientCnxn$SendThread@1235] - Session 0x122adcf0ae70000 for server null, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.Net.connect0(Native Method) at sun.nio.ch.Net.connect(Net.java:465) at sun.nio.ch.Net.connect(Net.java:457) at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:670) at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:275) at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:285) at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1098) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1133) 2016-07-28 09:30:23,351 [myid:] - INFO [QuorumPeer[myid=4](plain=/127.0.0.1:11380)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id4,name1=replica.4,name2=Follower] 2016-07-28 09:30:23,351 [myid:] - INFO [QuorumPeer[myid=4](plain=/127.0.0.1:11380)(secure=disabled):Follower@198] - shutdown called java.lang.Exception: shutdown Follower at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:198) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:1115) 2016-07-28 09:30:23,351 [myid:] - WARN [QuorumPeer[myid=4](plain=/127.0.0.1:11380)(secure=disabled):QuorumPeer@1158] - PeerState set to LOOKING 2016-07-28 09:30:23,351 [myid:] - WARN [QuorumPeer[myid=4](plain=/127.0.0.1:11380)(secure=disabled):QuorumPeer@1140] - QuorumPeer 
main thread exited 2016-07-28 09:30:23,351 [myid:] - INFO [QuorumPeer[myid=4](plain=/127.0.0.1:11380)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id4] 2016-07-28 09:30:23,352 [myid:] - INFO [QuorumPeer[myid=4](plain=/127.0.0.1:11380)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id4,name1=replica.4] 2016-07-28 09:30:23,352 [myid:] - INFO [QuorumPeer[myid=4](plain=/127.0.0.1:11380)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id4,name1=replica.1] 2016-07-28 09:30:23,352 [myid:] - INFO [QuorumPeer[myid=4](plain=/127.0.0.1:11380)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id4,name1=replica.2] 2016-07-28 09:30:23,352 [myid:] - INFO [QuorumPeer[myid=4](plain=/127.0.0.1:11380)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id4,name1=replica.3] 2016-07-28 09:30:23,352 [myid:] - INFO [QuorumPeer[myid=4](plain=/127.0.0.1:11380)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id4,name1=replica.5] 2016-07-28 09:30:23,352 [myid:] - INFO [main:QuorumUtil@250] - Shutting down quorum peer QuorumPeer[myid=5](plain=/127.0.0.1:11383)(secure=disabled) 2016-07-28 09:30:23,352 [myid:] - INFO [main:Leader@617] - Shutting down 2016-07-28 09:30:23,352 [myid:] - INFO [main:Leader@623] - Shutdown called java.lang.Exception: shutdown Leader! 
reason: quorum Peer shutdown at org.apache.zookeeper.server.quorum.Leader.shutdown(Leader.java:623) at org.apache.zookeeper.server.quorum.QuorumPeer.shutdown(QuorumPeer.java:1181) at org.apache.zookeeper.test.QuorumUtil.shutdown(QuorumUtil.java:251) at org.apache.zookeeper.test.QuorumUtil.shutdownAll(QuorumUtil.java:238) at org.apache.zookeeper.test.QuorumUtil.tearDown(QuorumUtil.java:306) at org.apache.zookeeper.test.ReconfigTest.tearDown(ReconfigTest.java:64) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:53) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:38) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518) at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906) 2016-07-28 09:30:23,353 [myid:] - INFO [main:ZooKeeperServer@498] - shutting down 2016-07-28 09:30:23,353 [myid:] - INFO [main:SessionTrackerImpl@232] - Shutting down 2016-07-28 09:30:23,353 [myid:] - INFO [main:LeaderRequestProcessor@77] - Shutting down 2016-07-28 09:30:23,353 [myid:] - INFO [main:PrepRequestProcessor@965] - Shutting down 2016-07-28 09:30:23,353 [myid:] - INFO [LearnerCnxAcceptor-/127.0.0.1:11384:Leader$LearnerCnxAcceptor@373] - exception while shutting down acceptor: java.net.SocketException: Socket closed 2016-07-28 09:30:23,353 [myid:] - INFO [ProcessThread(sid:5 cport:-1)::PrepRequestProcessor@154] - PrepRequestProcessor exited loop! 2016-07-28 09:30:23,353 [myid:] - INFO [main:ProposalRequestProcessor@88] - Shutting down 2016-07-28 09:30:23,354 [myid:] - INFO [main:CommitProcessor@414] - Shutting down 2016-07-28 09:30:23,354 [myid:] - INFO [CommitProcessor:5:CommitProcessor@299] - CommitProcessor exited loop! 2016-07-28 09:30:23,354 [myid:] - INFO [main:Leader$ToBeAppliedRequestProcessor@918] - Shutting down 2016-07-28 09:30:23,354 [myid:] - INFO [main:FinalRequestProcessor@479] - shutdown of request processor complete 2016-07-28 09:30:23,354 [myid:] - INFO [main:SyncRequestProcessor@191] - Shutting down 2016-07-28 09:30:23,354 [myid:] - INFO [SyncThread:5:SyncRequestProcessor@169] - SyncRequestProcessor exited! 
2016-07-28 09:30:23,355 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id5,name1=replica.5,name2=Leader,name3=InMemoryDataTree] 2016-07-28 09:30:23,355 [myid:] - WARN [LearnerHandler-/127.0.0.1:51992:LearnerHandler@619] - ******* GOODBYE /127.0.0.1:51992 ******** 2016-07-28 09:30:23,355 [myid:] - WARN [LearnerHandler-/127.0.0.1:51977:LearnerHandler@619] - ******* GOODBYE /127.0.0.1:51977 ******** 2016-07-28 09:30:23,355 [myid:] - WARN [LearnerHandler-/127.0.0.1:51992:LearnerHandler@903] - Ignoring unexpected exception java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1219) at java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:340) at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:338) at org.apache.zookeeper.server.quorum.LearnerHandler.shutdown(LearnerHandler.java:901) at org.apache.zookeeper.server.quorum.LearnerHandler.run(LearnerHandler.java:622) 2016-07-28 09:30:23,355 [myid:] - WARN [LearnerHandler-/127.0.0.1:51976:LearnerHandler@619] - ******* GOODBYE /127.0.0.1:51976 ******** 2016-07-28 09:30:23,355 [myid:] - WARN [LearnerHandler-/127.0.0.1:51975:LearnerHandler@619] - ******* GOODBYE /127.0.0.1:51975 ******** 2016-07-28 09:30:23,356 [myid:] - INFO [ConnnectionExpirer:NIOServerCnxnFactory$ConnectionExpirerThread@583] - ConnnectionExpirerThread interrupted 2016-07-28 09:30:23,356 [myid:] - INFO [NIOServerCxnFactory.SelectorThread-0:NIOServerCnxnFactory$SelectorThread@420] - selector thread exitted run method 2016-07-28 09:30:23,355 [myid:] - WARN [LearnerHandler-/127.0.0.1:51977:LearnerHandler@903] - Ignoring unexpected exception java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1219) at java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:340) 
at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:338) at org.apache.zookeeper.server.quorum.LearnerHandler.shutdown(LearnerHandler.java:901) at org.apache.zookeeper.server.quorum.LearnerHandler.run(LearnerHandler.java:622) 2016-07-28 09:30:23,356 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:/127.0.0.1:11383:NIOServerCnxnFactory$AcceptThread@219] - accept thread exitted run method 2016-07-28 09:30:23,356 [myid:] - INFO [NIOServerCxnFactory.SelectorThread-1:NIOServerCnxnFactory$SelectorThread@420] - selector thread exitted run method 2016-07-28 09:30:23,356 [myid:] - WARN [LearnerHandler-/127.0.0.1:51975:LearnerHandler@903] - Ignoring unexpected exception java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1219) at java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:340) at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:338) at org.apache.zookeeper.server.quorum.LearnerHandler.shutdown(LearnerHandler.java:901) at org.apache.zookeeper.server.quorum.LearnerHandler.run(LearnerHandler.java:622) 2016-07-28 09:30:23,356 [myid:] - WARN [LearnerHandler-/127.0.0.1:51976:LearnerHandler@903] - Ignoring unexpected exception java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1219) at java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:340) at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:338) at org.apache.zookeeper.server.quorum.LearnerHandler.shutdown(LearnerHandler.java:901) at org.apache.zookeeper.server.quorum.LearnerHandler.run(LearnerHandler.java:622) 2016-07-28 09:30:23,357 [myid:] - INFO [/127.0.0.1:11385:QuorumCnxManager$Listener@661] - Leaving listener 2016-07-28 09:30:23,357 [myid:] - INFO [QuorumPeer[myid=5](plain=/127.0.0.1:11383)(secure=disabled):MBeanRegistry@128] 
- Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id5,name1=replica.5,name2=Leader] 2016-07-28 09:30:23,358 [myid:] - INFO [main:QuorumUtil@254] - Shutting down leader election QuorumPeer[myid=5](plain=/127.0.0.1:11383)(secure=disabled) 2016-07-28 09:30:23,358 [myid:] - WARN [QuorumPeer[myid=5](plain=/127.0.0.1:11383)(secure=disabled):QuorumPeer@1127] - Unexpected exception java.lang.InterruptedException at java.lang.Object.wait(Native Method) at org.apache.zookeeper.server.quorum.Leader.lead(Leader.java:561) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:1124) 2016-07-28 09:30:23,358 [myid:] - INFO [QuorumPeer[myid=5](plain=/127.0.0.1:11383)(secure=disabled):Leader@617] - Shutting down 2016-07-28 09:30:23,358 [myid:] - WARN [QuorumPeer[myid=5](plain=/127.0.0.1:11383)(secure=disabled):QuorumPeer@1158] - PeerState set to LOOKING 2016-07-28 09:30:23,358 [myid:] - WARN [QuorumPeer[myid=5](plain=/127.0.0.1:11383)(secure=disabled):QuorumPeer@1140] - QuorumPeer main thread exited 2016-07-28 09:30:23,358 [myid:] - INFO [main:QuorumUtil@259] - Waiting for QuorumPeer[myid=5](plain=/127.0.0.1:11383)(secure=disabled) to exit thread 2016-07-28 09:30:23,358 [myid:] - INFO [QuorumPeer[myid=5](plain=/127.0.0.1:11383)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id5] 2016-07-28 09:30:23,359 [myid:] - INFO [QuorumPeer[myid=5](plain=/127.0.0.1:11383)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id5,name1=replica.5] 2016-07-28 09:30:23,359 [myid:] - INFO [QuorumPeer[myid=5](plain=/127.0.0.1:11383)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id5,name1=replica.1] 2016-07-28 09:30:23,359 [myid:] - INFO [QuorumPeer[myid=5](plain=/127.0.0.1:11383)(secure=disabled):MBeanRegistry@128] - Unregister MBean 
[org.apache.ZooKeeperService:name0=ReplicatedServer_id5,name1=replica.2] 2016-07-28 09:30:23,359 [myid:] - INFO [QuorumPeer[myid=5](plain=/127.0.0.1:11383)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id5,name1=replica.3] 2016-07-28 09:30:23,359 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 11371 2016-07-28 09:30:23,359 [myid:] - INFO [main:QuorumUtil@243] - 127.0.0.1:11371 is no longer accepting client connections 2016-07-28 09:30:23,359 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 11374 2016-07-28 09:30:23,360 [myid:] - INFO [main:QuorumUtil@243] - 127.0.0.1:11374 is no longer accepting client connections 2016-07-28 09:30:23,360 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 11377 2016-07-28 09:30:23,360 [myid:] - INFO [main:QuorumUtil@243] - 127.0.0.1:11377 is no longer accepting client connections 2016-07-28 09:30:23,360 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 11380 2016-07-28 09:30:23,360 [myid:] - INFO [main:QuorumUtil@243] - 127.0.0.1:11380 is no longer accepting client connections 2016-07-28 09:30:23,360 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 11383 2016-07-28 09:30:23,360 [myid:] - INFO [main:QuorumUtil@243] - 127.0.0.1:11383 is no longer accepting client connections 2016-07-28 09:30:23,361 [myid:] - INFO [main:ZKTestCase$1@65] - SUCCEEDED testRemoveOneAsynchronous 2016-07-28 09:30:23,361 [myid:] - INFO [main:ZKTestCase$1@60] - FINISHED testRemoveOneAsynchronous
{noformat}

{noformat}
Failed org.apache.zookeeper.test.ReconfigTest.testPortChangeToBlockedPortLeader
Failing for the past 5 builds (Since Failed#1241 )
Took 1 min 38 sec.
Error Message
client could not connect to reestablished quorum: giving up after 30+ seconds.
Stacktrace
junit.framework.AssertionFailedError: client could not connect to reestablished quorum: giving up after 30+ seconds.
at org.apache.zookeeper.test.ReconfigTest.testNormalOperation(ReconfigTest.java:173) at org.apache.zookeeper.test.ReconfigTest.testPortChangeToBlockedPort(ReconfigTest.java:732) at org.apache.zookeeper.test.ReconfigTest.testPortChangeToBlockedPortLeader(ReconfigTest.java:662) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:79) Standard Output 2016-07-28 09:24:37,121 [myid:] - INFO [main:JUnit4ZKTestRunner@47] - No test.method specified. using default methods. 2016-07-28 09:24:37,185 [myid:] - INFO [main:JUnit4ZKTestRunner@47] - No test.method specified. using default methods. 2016-07-28 09:24:37,200 [myid:] - INFO [main:ZKTestCase$1@55] - STARTING testQuorumSystemChange 2016-07-28 09:24:37,202 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@77] - RUNNING TEST METHOD testQuorumSystemChange 2016-07-28 09:24:37,577 [myid:] - INFO [main:PortAssignment@157] - Single test process using ports from 11221 - 32767. 2016-07-28 09:24:37,577 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11222 from range 11221 - 32767. 2016-07-28 09:24:37,579 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11223 from range 11221 - 32767. 2016-07-28 09:24:37,579 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11224 from range 11221 - 32767. 2016-07-28 09:24:37,580 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11225 from range 11221 - 32767. 2016-07-28 09:24:37,581 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11226 from range 11221 - 32767. 2016-07-28 09:24:37,581 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11227 from range 11221 - 32767. 2016-07-28 09:24:37,581 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11228 from range 11221 - 32767. 2016-07-28 09:24:37,582 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11229 from range 11221 - 32767. 2016-07-28 09:24:37,582 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11230 from range 11221 - 32767. 
2016-07-28 09:24:37,582 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11231 from range 11221 - 32767. 2016-07-28 09:24:37,582 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11232 from range 11221 - 32767. 2016-07-28 09:24:37,584 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11233 from range 11221 - 32767. 2016-07-28 09:24:37,585 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11234 from range 11221 - 32767. 2016-07-28 09:24:37,585 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11235 from range 11221 - 32767. 2016-07-28 09:24:37,585 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11236 from range 11221 - 32767. 2016-07-28 09:24:37,585 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11237 from range 11221 - 32767. 2016-07-28 09:24:37,586 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11238 from range 11221 - 32767. 2016-07-28 09:24:37,586 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11239 from range 11221 - 32767. 2016-07-28 09:24:37,586 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11240 from range 11221 - 32767. 2016-07-28 09:24:37,587 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11241 from range 11221 - 32767. 2016-07-28 09:24:37,587 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11242 from range 11221 - 32767. 2016-07-28 09:24:37,587 [myid:] - INFO [main:QuorumUtil@116] - Creating QuorumPeer 1; public port 11222 2016-07-28 09:24:37,609 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. 
2016-07-28 09:24:37,615 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port /127.0.0.1:11222 2016-07-28 09:24:37,638 [myid:] - INFO [main:QuorumUtil@116] - Creating QuorumPeer 2; public port 11225 2016-07-28 09:24:37,638 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. 2016-07-28 09:24:37,639 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port /127.0.0.1:11225 2016-07-28 09:24:37,639 [myid:] - INFO [main:QuorumUtil@116] - Creating QuorumPeer 3; public port 11228 2016-07-28 09:24:37,640 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. 2016-07-28 09:24:37,640 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port /127.0.0.1:11228 2016-07-28 09:24:37,641 [myid:] - INFO [main:QuorumUtil@116] - Creating QuorumPeer 4; public port 11231 2016-07-28 09:24:37,641 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. 2016-07-28 09:24:37,641 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port /127.0.0.1:11231 2016-07-28 09:24:37,642 [myid:] - INFO [main:QuorumUtil@116] - Creating QuorumPeer 5; public port 11234 2016-07-28 09:24:37,642 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. 
2016-07-28 09:24:37,642 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port /127.0.0.1:11234 2016-07-28 09:24:37,643 [myid:] - INFO [main:QuorumUtil@116] - Creating QuorumPeer 6; public port 11237 2016-07-28 09:24:37,643 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. 2016-07-28 09:24:37,644 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port /127.0.0.1:11237 2016-07-28 09:24:37,644 [myid:] - INFO [main:QuorumUtil@116] - Creating QuorumPeer 7; public port 11240 2016-07-28 09:24:37,644 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. 2016-07-28 09:24:37,645 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port /127.0.0.1:11240 2016-07-28 09:24:37,646 [myid:] - INFO [main:QuorumUtil@250] - Shutting down quorum peer QuorumPeer 2016-07-28 09:24:37,646 [myid:] - INFO [main:QuorumUtil@257] - No election available to shutdown QuorumPeer 2016-07-28 09:24:37,646 [myid:] - INFO [main:QuorumUtil@259] - Waiting for QuorumPeer to exit thread 2016-07-28 09:24:37,646 [myid:] - INFO [main:QuorumUtil@250] - Shutting down quorum peer QuorumPeer 2016-07-28 09:24:37,647 [myid:] - INFO [main:QuorumUtil@257] - No election available to shutdown QuorumPeer 2016-07-28 09:24:37,647 [myid:] - INFO [main:QuorumUtil@259] - Waiting for QuorumPeer to exit thread 2016-07-28 09:24:37,647 [myid:] - INFO [main:QuorumUtil@250] - Shutting down quorum peer QuorumPeer 2016-07-28 09:24:37,647 [myid:] - INFO [main:QuorumUtil@257] - No election available to shutdown QuorumPeer 2016-07-28 09:24:37,647 [myid:] - INFO [main:QuorumUtil@259] - Waiting for QuorumPeer to exit thread 2016-07-28 09:24:37,647 [myid:] - INFO [main:QuorumUtil@250] - Shutting down quorum peer QuorumPeer 2016-07-28 
09:24:37,647 [myid:] - INFO [main:QuorumUtil@257] - No election available to shutdown QuorumPeer 2016-07-28 09:24:37,647 [myid:] - INFO [main:QuorumUtil@259] - Waiting for QuorumPeer to exit thread 2016-07-28 09:24:37,648 [myid:] - INFO [main:QuorumUtil@250] - Shutting down quorum peer QuorumPeer 2016-07-28 09:24:37,648 [myid:] - INFO [main:QuorumUtil@257] - No election available to shutdown QuorumPeer 2016-07-28 09:24:37,648 [myid:] - INFO [main:QuorumUtil@259] - Waiting for QuorumPeer to exit thread 2016-07-28 09:24:37,648 [myid:] - INFO [main:QuorumUtil@250] - Shutting down quorum peer QuorumPeer 2016-07-28 09:24:37,648 [myid:] - INFO [main:QuorumUtil@257] - No election available to shutdown QuorumPeer 2016-07-28 09:24:37,648 [myid:] - INFO [main:QuorumUtil@259] - Waiting for QuorumPeer to exit thread 2016-07-28 09:24:37,648 [myid:] - INFO [main:QuorumUtil@250] - Shutting down quorum peer QuorumPeer 2016-07-28 09:24:37,649 [myid:] - INFO [main:QuorumUtil@257] - No election available to shutdown QuorumPeer 2016-07-28 09:24:37,649 [myid:] - INFO [main:QuorumUtil@259] - Waiting for QuorumPeer to exit thread 2016-07-28 09:24:37,651 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 11222 2016-07-28 09:24:37,651 [myid:] - INFO [main:QuorumUtil@243] - 127.0.0.1:11222 is no longer accepting client connections 2016-07-28 09:24:37,651 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 11225 2016-07-28 09:24:37,652 [myid:] - INFO [main:QuorumUtil@243] - 127.0.0.1:11225 is no longer accepting client connections 2016-07-28 09:24:37,652 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 11228 2016-07-28 09:24:37,652 [myid:] - INFO [main:QuorumUtil@243] - 127.0.0.1:11228 is no longer accepting client connections 2016-07-28 09:24:37,652 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 11231 2016-07-28 09:24:37,653 [myid:] - INFO [main:QuorumUtil@243] - 127.0.0.1:11231 is no longer accepting client 
connections 2016-07-28 09:24:37,653 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 11234 2016-07-28 09:24:37,653 [myid:] - INFO [main:QuorumUtil@243] - 127.0.0.1:11234 is no longer accepting client connections 2016-07-28 09:24:37,653 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 11237 2016-07-28 09:24:37,653 [myid:] - INFO [main:QuorumUtil@243] - 127.0.0.1:11237 is no longer accepting client connections 2016-07-28 09:24:37,654 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 11240 2016-07-28 09:24:37,654 [myid:] - INFO [main:QuorumUtil@243] - 127.0.0.1:11240 is no longer accepting client connections 2016-07-28 09:24:37,654 [myid:] - INFO [main:QuorumUtil@203] - Creating QuorumPeer 1; public port 11222 2016-07-28 09:24:37,654 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. 2016-07-28 09:24:37,655 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port /127.0.0.1:11222 2016-07-28 09:24:37,660 [myid:] - INFO [main:QuorumPeer@776] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-28 09:24:37,708 [myid:] - INFO [main:QuorumPeer@791] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-28 09:24:37,754 [myid:] - INFO [main:QuorumUtil@146] - Started QuorumPeer 1 2016-07-28 09:24:37,754 [myid:] - INFO [main:QuorumUtil@203] - Creating QuorumPeer 2; public port 11225 2016-07-28 09:24:37,755 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. 
2016-07-28 09:24:37,755 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port /127.0.0.1:11225 2016-07-28 09:24:37,756 [myid:] - INFO [main:QuorumPeer@776] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-28 09:24:37,759 [myid:] - INFO [QuorumPeerListener:QuorumCnxManager$Listener@632] - My election bind port: /127.0.0.1:11224 2016-07-28 09:24:37,771 [myid:] - INFO [main:QuorumPeer@791] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-28 09:24:37,780 [myid:] - INFO [main:QuorumUtil@146] - Started QuorumPeer 2 2016-07-28 09:24:37,780 [myid:] - INFO [main:QuorumUtil@203] - Creating QuorumPeer 3; public port 11228 2016-07-28 09:24:37,781 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. 2016-07-28 09:24:37,781 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port /127.0.0.1:11228 2016-07-28 09:24:37,781 [myid:] - INFO [QuorumPeerListener:QuorumCnxManager$Listener@632] - My election bind port: /127.0.0.1:11227 2016-07-28 09:24:37,888 [myid:] - INFO [main:QuorumPeer@776] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-28 09:24:37,894 [myid:] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:11222)(secure=disabled):QuorumPeer@1033] - LOOKING 2016-07-28 09:24:37,894 [myid:] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:11225)(secure=disabled):QuorumPeer@1033] - LOOKING 2016-07-28 09:24:37,895 [myid:] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:11225)(secure=disabled):FastLeaderElection@894] - New election. 
My id = 2, proposed zxid=0x0 2016-07-28 09:24:37,895 [myid:] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:11222)(secure=disabled):FastLeaderElection@894] - New election. My id = 1, proposed zxid=0x0 2016-07-28 09:24:37,899 [myid:] - INFO [/127.0.0.1:11227:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:50183 2016-07-28 09:24:37,899 [myid:] - INFO [WorkerSender[myid=1]:QuorumCnxManager@276] - Have smaller server identifier, so dropping the connection: (2, 1) 2016-07-28 09:24:37,901 [myid:] - INFO [/127.0.0.1:11224:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:50185 2016-07-28 09:24:37,901 [myid:] - WARN [WorkerSender[myid=1]:QuorumCnxManager@455] - Cannot open channel to 3 at election address /127.0.0.1:11230 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465) at java.lang.Thread.run(Thread.java:745) 2016-07-28 09:24:37,905 [myid:] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@688] - Notification: 2 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING 
(my state)0 (n.config version) 2016-07-28 09:24:37,906 [myid:] - WARN [WorkerSender[myid=2]:QuorumCnxManager@455] - Cannot open channel to 3 at election address /127.0.0.1:11230 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465) at java.lang.Thread.run(Thread.java:745) 2016-07-28 09:24:37,906 [myid:] - WARN [WorkerSender[myid=2]:QuorumCnxManager@455] - Cannot open channel to 4 at election address /127.0.0.1:11233 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482) at 
org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465) at java.lang.Thread.run(Thread.java:745) 2016-07-28 09:24:37,905 [myid:] - WARN [WorkerSender[myid=1]:QuorumCnxManager@455] - Cannot open channel to 4 at election address /127.0.0.1:11233 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465) at java.lang.Thread.run(Thread.java:745) 2016-07-28 09:24:37,907 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@688] - Notification: 2 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-07-28 09:24:37,908 [myid:] - WARN [WorkerSender[myid=1]:QuorumCnxManager@455] - Cannot open channel to 5 at election address /127.0.0.1:11236 java.net.ConnectException: Connection refused at 
java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465) at java.lang.Thread.run(Thread.java:745) 2016-07-28 09:24:37,920 [myid:] - INFO [main:QuorumPeer@791] - acceptedEpoch not found! Creating with a reasonable default of 0. 
This should only happen when you are upgrading your installation 2016-07-28 09:24:37,961 [myid:] - WARN [WorkerSender[myid=2]:QuorumCnxManager@455] - Cannot open channel to 5 at election address /127.0.0.1:11236 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465) at java.lang.Thread.run(Thread.java:745) 2016-07-28 09:24:37,908 [myid:] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-07-28 09:24:37,962 [myid:] - WARN [WorkerSender[myid=1]:QuorumCnxManager@455] - Cannot open channel to 6 at election address /127.0.0.1:11239 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at 
java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465) at java.lang.Thread.run(Thread.java:745) 2016-07-28 09:24:37,962 [myid:] - WARN [WorkerSender[myid=2]:QuorumCnxManager@455] - Cannot open channel to 6 at election address /127.0.0.1:11239 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465) at java.lang.Thread.run(Thread.java:745) 2016-07-28 09:24:37,963 [myid:] - WARN [WorkerSender[myid=1]:QuorumCnxManager@455] - Cannot open channel to 7 at election address /127.0.0.1:11242 java.net.ConnectException: Connection refused at 
java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465) at java.lang.Thread.run(Thread.java:745) 2016-07-28 09:24:37,963 [myid:] - WARN [WorkerSender[myid=2]:QuorumCnxManager@455] - Cannot open channel to 7 at election address /127.0.0.1:11242 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486) at 
org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465) at java.lang.Thread.run(Thread.java:745) 2016-07-28 09:24:37,965 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-07-28 09:24:37,967 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-07-28 09:24:37,968 [myid:] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-07-28 09:24:37,968 [myid:] - WARN [WorkerSender[myid=1]:QuorumCnxManager@455] - Cannot open channel to 3 at election address /127.0.0.1:11230 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486) at 
org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465) at java.lang.Thread.run(Thread.java:745) 2016-07-28 09:24:37,969 [myid:] - WARN [WorkerSender[myid=1]:QuorumCnxManager@455] - Cannot open channel to 4 at election address /127.0.0.1:11233 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465) at java.lang.Thread.run(Thread.java:745) 2016-07-28 09:24:37,969 [myid:] - WARN [WorkerSender[myid=1]:QuorumCnxManager@455] - Cannot open channel to 5 at election address /127.0.0.1:11236 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441) at 
org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465) at java.lang.Thread.run(Thread.java:745) 2016-07-28 09:24:37,981 [myid:] - WARN [WorkerSender[myid=1]:QuorumCnxManager@455] - Cannot open channel to 6 at election address /127.0.0.1:11239 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465) at java.lang.Thread.run(Thread.java:745) 2016-07-28 09:24:37,982 [myid:] - WARN [WorkerSender[myid=1]:QuorumCnxManager@455] - Cannot open channel to 7 at election address /127.0.0.1:11242 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465)
    at java.lang.Thread.run(Thread.java:745)
2016-07-28 09:24:38,015 [myid:] - INFO [main:QuorumUtil@146] - Started QuorumPeer 3
2016-07-28 09:24:38,016 [myid:] - INFO [main:QuorumUtil@203] - Creating QuorumPeer 4; public port 11231
2016-07-28 09:24:38,016 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers.
2016-07-28 09:24:38,017 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port /127.0.0.1:11231
2016-07-28 09:24:38,017 [myid:] - INFO [QuorumPeer[myid=3](plain=/127.0.0.1:11228)(secure=disabled):QuorumPeer@1033] - LOOKING
2016-07-28 09:24:38,018 [myid:] - INFO [QuorumPeer[myid=3](plain=/127.0.0.1:11228)(secure=disabled):FastLeaderElection@894] - New election. My id = 3, proposed zxid=0x0
2016-07-28 09:24:38,018 [myid:] - INFO [main:QuorumPeer@776] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2016-07-28 09:24:38,020 [myid:] - INFO [QuorumPeerListener:QuorumCnxManager$Listener@632] - My election bind port: /127.0.0.1:11230
2016-07-28 09:24:38,025 [myid:] - INFO [/127.0.0.1:11224:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:50200
2016-07-28 09:24:38,026 [myid:] - INFO [/127.0.0.1:11227:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:50201
2016-07-28 09:24:38,027 [myid:] - WARN [WorkerSender[myid=3]:QuorumCnxManager@455] - Cannot open channel to 4 at election address /127.0.0.1:11233
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465)
    at java.lang.Thread.run(Thread.java:745)
2016-07-28 09:24:38,027 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@688] - Notification: 2 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-07-28 09:24:38,028 [myid:] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@688] - Notification: 2 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-07-28 09:24:38,029 [myid:] - INFO [WorkerReceiver[myid=3]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-07-28 09:24:38,029 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@688] - Notification: 2 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-07-28 09:24:38,031 [myid:] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@688] - Notification: 2 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-07-28 09:24:38,031 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@688] - Notification: 2 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-07-28 09:24:38,031 [myid:] - INFO [WorkerReceiver[myid=3]:FastLeaderElection@688] - Notification: 2 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-07-28 09:24:38,032 [myid:] - WARN [WorkerSender[myid=1]:QuorumCnxManager@455] - Cannot open channel to 4 at election address /127.0.0.1:11233
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465)
    at java.lang.Thread.run(Thread.java:745)
2016-07-28 09:24:38,032 [myid:] - WARN [WorkerSender[myid=1]:QuorumCnxManager@455] - Cannot open channel to 5 at election address /127.0.0.1:11236
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465)
    at java.lang.Thread.run(Thread.java:745)
2016-07-28 09:24:38,033 [myid:] - WARN [WorkerSender[myid=1]:QuorumCnxManager@455] - Cannot open channel to 6 at election address /127.0.0.1:11239
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465)
    at java.lang.Thread.run(Thread.java:745)
2016-07-28 09:24:38,033 [myid:] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@688] - Notification: 2 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-07-28 09:24:38,032 [myid:] - INFO [WorkerReceiver[myid=3]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-07-28 09:24:38,033 [myid:] - WARN [WorkerSender[myid=2]:QuorumCnxManager@455] - Cannot open channel to 4 at election address /127.0.0.1:11233
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465)
    at java.lang.Thread.run(Thread.java:745)
2016-07-28 09:24:38,034 [myid:] - WARN [WorkerSender[myid=1]:QuorumCnxManager@455] - Cannot open channel to 7 at election address /127.0.0.1:11242
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465)
    at java.lang.Thread.run(Thread.java:745)
2016-07-28 09:24:38,035 [myid:] - INFO [WorkerReceiver[myid=3]:FastLeaderElection@688] - Notification: 2 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-07-28 09:24:38,036 [myid:] - WARN [WorkerSender[myid=2]:QuorumCnxManager@455] - Cannot open channel to 5 at election address /127.0.0.1:11236
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465)
    at java.lang.Thread.run(Thread.java:745)
2016-07-28 09:24:38,037 [myid:] - INFO [WorkerReceiver[myid=3]:FastLeaderElection@688] - Notification: 2 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-07-28 09:24:38,037 [myid:] - WARN [WorkerSender[myid=2]:QuorumCnxManager@455] - Cannot open channel to 6 at election address /127.0.0.1:11239
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465)
    at java.lang.Thread.run(Thread.java:745)
2016-07-28 09:24:38,038 [myid:] - WARN [WorkerSender[myid=2]:QuorumCnxManager@455] - Cannot open channel to 7 at election address /127.0.0.1:11242
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465)
    at java.lang.Thread.run(Thread.java:745)
2016-07-28 09:24:38,044 [myid:] - WARN [WorkerSender[myid=3]:QuorumCnxManager@455] - Cannot open channel to 5 at election address /127.0.0.1:11236
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465)
    at java.lang.Thread.run(Thread.java:745)
2016-07-28 09:24:38,045 [myid:] - WARN [WorkerSender[myid=3]:QuorumCnxManager@455] - Cannot open channel to 6 at election address /127.0.0.1:11239
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465)
    at java.lang.Thread.run(Thread.java:745)
2016-07-28 09:24:38,045 [myid:] - WARN [WorkerSender[myid=3]:QuorumCnxManager@455] - Cannot open channel to 7 at election address /127.0.0.1:11242
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
...[truncated 3451538 chars]...
mPeer[myid=1](plain=/127.0.0.1:11371)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id1]
2016-07-28 09:30:21,332 [myid:] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:11371)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id1,name1=replica.1]
2016-07-28 09:30:21,332 [myid:] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:11371)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id1,name1=replica.2]
2016-07-28 09:30:21,332 [myid:] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:11371)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id1,name1=replica.3]
2016-07-28 09:30:21,332 [myid:] - INFO [QuorumPeer[myid=1](plain=/127.0.0.1:11371)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id1,name1=replica.5]
2016-07-28 09:30:21,332 [myid:] - INFO [main:QuorumUtil@250] - Shutting down quorum peer QuorumPeer[myid=2](plain=/127.0.0.1:11374)(secure=disabled)
2016-07-28 09:30:21,332 [myid:] - INFO [main:Follower@198] - shutdown called
java.lang.Exception: shutdown Follower
    at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:198)
    at org.apache.zookeeper.server.quorum.QuorumPeer.shutdown(QuorumPeer.java:1184)
    at org.apache.zookeeper.test.QuorumUtil.shutdown(QuorumUtil.java:251)
    at org.apache.zookeeper.test.QuorumUtil.shutdownAll(QuorumUtil.java:238)
    at org.apache.zookeeper.test.QuorumUtil.tearDown(QuorumUtil.java:306)
    at org.apache.zookeeper.test.ReconfigTest.tearDown(ReconfigTest.java:64)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
    at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:53)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
    at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:38)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906)
2016-07-28 09:30:21,333 [myid:] - INFO [main:LearnerZooKeeperServer@165] - Shutting down
2016-07-28 09:30:21,333 [myid:] - INFO [main:ZooKeeperServer@498] - shutting down
2016-07-28 09:30:21,332 [myid:] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:11374)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id2,name1=replica.2,name2=Follower]
2016-07-28 09:30:21,333 [myid:] - INFO [main:FollowerRequestProcessor@138] - Shutting down
2016-07-28 09:30:21,333 [myid:] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:11374)(secure=disabled):Follower@198] - shutdown called
java.lang.Exception: shutdown Follower
    at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:198)
    at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:1115)
2016-07-28 09:30:21,333 [myid:] - INFO [FollowerRequestProcessor:2:FollowerRequestProcessor@109] - FollowerRequestProcessor exited loop!
2016-07-28 09:30:21,333 [myid:] - INFO [main:CommitProcessor@414] - Shutting down
2016-07-28 09:30:21,334 [myid:] - INFO [CommitProcessor:2:CommitProcessor@299] - CommitProcessor exited loop!
2016-07-28 09:30:21,334 [myid:] - INFO [main:FinalRequestProcessor@479] - shutdown of request processor complete
2016-07-28 09:30:21,334 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id2,name1=replica.2,name2=Follower,name3=InMemoryDataTree]
2016-07-28 09:30:21,334 [myid:] - INFO [main:SyncRequestProcessor@191] - Shutting down
2016-07-28 09:30:21,334 [myid:] - INFO [SyncThread:2:SyncRequestProcessor@169] - SyncRequestProcessor exited!
2016-07-28 09:30:21,334 [myid:] - WARN [QuorumPeer[myid=2](plain=/127.0.0.1:11374)(secure=disabled):QuorumPeer@1158] - PeerState set to LOOKING
2016-07-28 09:30:21,335 [myid:] - WARN [QuorumPeer[myid=2](plain=/127.0.0.1:11374)(secure=disabled):QuorumPeer@1140] - QuorumPeer main thread exited
2016-07-28 09:30:21,335 [myid:] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:11374)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id2]
2016-07-28 09:30:21,335 [myid:] - INFO [ConnnectionExpirer:NIOServerCnxnFactory$ConnectionExpirerThread@583] - ConnnectionExpirerThread interrupted
2016-07-28 09:30:21,335 [myid:] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:11374)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id2,name1=replica.2]
2016-07-28 09:30:21,336 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:/127.0.0.1:11374:NIOServerCnxnFactory$AcceptThread@219] - accept thread exitted run method
2016-07-28 09:30:21,337 [myid:] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:11374)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id2,name1=replica.1]
2016-07-28 09:30:21,337 [myid:] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:11374)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id2,name1=replica.3]
2016-07-28 09:30:21,337 [myid:] - INFO [NIOServerCxnFactory.SelectorThread-0:NIOServerCnxnFactory$SelectorThread@420] - selector thread exitted run method
2016-07-28 09:30:21,337 [myid:] - INFO [QuorumPeer[myid=2](plain=/127.0.0.1:11374)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id2,name1=replica.5]
2016-07-28 09:30:21,337 [myid:] - INFO [NIOServerCxnFactory.SelectorThread-1:NIOServerCnxnFactory$SelectorThread@420] - selector thread exitted run method
2016-07-28 09:30:21,337 [myid:] - INFO [/127.0.0.1:11376:QuorumCnxManager$Listener@661] - Leaving listener
2016-07-28 09:30:21,338 [myid:] - INFO [main:QuorumUtil@254] - Shutting down leader election QuorumPeer[myid=2](plain=/127.0.0.1:11374)(secure=disabled)
2016-07-28 09:30:21,339 [myid:] - INFO [main:QuorumUtil@259] - Waiting for QuorumPeer[myid=2](plain=/127.0.0.1:11374)(secure=disabled) to exit thread
2016-07-28 09:30:21,339 [myid:] - INFO [main:QuorumUtil@250] - Shutting down quorum peer QuorumPeer[myid=3](plain=/127.0.0.1:11377)(secure=disabled)
2016-07-28 09:30:21,339 [myid:] - INFO [main:Follower@198] - shutdown called
java.lang.Exception: shutdown Follower
    at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:198)
    at org.apache.zookeeper.server.quorum.QuorumPeer.shutdown(QuorumPeer.java:1184)
    at org.apache.zookeeper.test.QuorumUtil.shutdown(QuorumUtil.java:251)
    at org.apache.zookeeper.test.QuorumUtil.shutdownAll(QuorumUtil.java:238)
    at org.apache.zookeeper.test.QuorumUtil.tearDown(QuorumUtil.java:306)
    at org.apache.zookeeper.test.ReconfigTest.tearDown(ReconfigTest.java:64)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
    at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:53)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
    at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:38)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906)
2016-07-28 09:30:21,339 [myid:] - INFO [main:LearnerZooKeeperServer@165] - Shutting down
2016-07-28 09:30:21,339 [myid:] - INFO [main:ZooKeeperServer@498] - shutting down
2016-07-28 09:30:21,339 [myid:] - INFO [main:FollowerRequestProcessor@138] - Shutting down
2016-07-28 09:30:21,339 [myid:] - INFO [main:CommitProcessor@414] - Shutting down
2016-07-28 09:30:21,339 [myid:] - INFO [FollowerRequestProcessor:3:FollowerRequestProcessor@109] - FollowerRequestProcessor exited loop!
2016-07-28 09:30:21,339 [myid:] - INFO [CommitProcessor:3:CommitProcessor@299] - CommitProcessor exited loop!
2016-07-28 09:30:21,339 [myid:] - INFO [main:FinalRequestProcessor@479] - shutdown of request processor complete
2016-07-28 09:30:21,340 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id3,name1=replica.3,name2=Follower,name3=InMemoryDataTree]
2016-07-28 09:30:21,340 [myid:] - INFO [main:SyncRequestProcessor@191] - Shutting down
2016-07-28 09:30:21,340 [myid:] - INFO [SyncThread:3:SyncRequestProcessor@169] - SyncRequestProcessor exited!
2016-07-28 09:30:21,340 [myid:] - INFO [ConnnectionExpirer:NIOServerCnxnFactory$ConnectionExpirerThread@583] - ConnnectionExpirerThread interrupted
2016-07-28 09:30:21,341 [myid:] - INFO [NIOServerCxnFactory.SelectorThread-0:NIOServerCnxnFactory$SelectorThread@420] - selector thread exitted run method
2016-07-28 09:30:21,341 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:/127.0.0.1:11377:NIOServerCnxnFactory$AcceptThread@219] - accept thread exitted run method
2016-07-28 09:30:21,342 [myid:] - INFO [NIOServerCxnFactory.SelectorThread-1:NIOServerCnxnFactory$SelectorThread@420] - selector thread exitted run method
2016-07-28 09:30:21,342 [myid:] - INFO [/127.0.0.1:11379:QuorumCnxManager$Listener@661] - Leaving listener
2016-07-28 09:30:21,343 [myid:] - INFO [main:QuorumUtil@254] - Shutting down leader election QuorumPeer[myid=3](plain=/127.0.0.1:11377)(secure=disabled)
2016-07-28 09:30:21,343 [myid:] - INFO [main:QuorumUtil@259] - Waiting for QuorumPeer[myid=3](plain=/127.0.0.1:11377)(secure=disabled) to exit thread
2016-07-28 09:30:21,421 [myid:127.0.0.1:11273] - INFO [main-SendThread(127.0.0.1:11273):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:11273. Will not attempt to authenticate using SASL (unknown error)
2016-07-28 09:30:21,422 [myid:127.0.0.1:11273] - ERROR [main-SendThread(127.0.0.1:11273):ClientCnxnSocketNIO@287] - Unable to open socket to 127.0.0.1/127.0.0.1:11273
2016-07-28 09:30:21,422 [myid:127.0.0.1:11273] - WARN [main-SendThread(127.0.0.1:11273):ClientCnxn$SendThread@1235] - Session 0x222add090c60000 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
    at sun.nio.ch.Net.connect0(Native Method)
    at sun.nio.ch.Net.connect(Net.java:465)
    at sun.nio.ch.Net.connect(Net.java:457)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:670)
    at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:275)
    at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:285)
    at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1098)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1133)
2016-07-28 09:30:21,451 [myid:127.0.0.1:11257] - INFO [main-SendThread(127.0.0.1:11257):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:11257. Will not attempt to authenticate using SASL (unknown error)
2016-07-28 09:30:21,451 [myid:127.0.0.1:11257] - ERROR [main-SendThread(127.0.0.1:11257):ClientCnxnSocketNIO@287] - Unable to open socket to 127.0.0.1/127.0.0.1:11257
2016-07-28 09:30:21,451 [myid:127.0.0.1:11257] - WARN [main-SendThread(127.0.0.1:11257):ClientCnxn$SendThread@1235] - Session 0x122adcf0ae70000 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
    at sun.nio.ch.Net.connect0(Native Method)
    at sun.nio.ch.Net.connect(Net.java:465)
    at sun.nio.ch.Net.connect(Net.java:457)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:670)
    at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:275)
    at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:285)
    at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1098)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1133)
2016-07-28 09:30:21,671 [myid:127.0.0.1:11276] - INFO [main-SendThread(127.0.0.1:11276):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:11276. Will not attempt to authenticate using SASL (unknown error)
2016-07-28 09:30:21,671 [myid:127.0.0.1:11276] - ERROR [main-SendThread(127.0.0.1:11276):ClientCnxnSocketNIO@287] - Unable to open socket to 127.0.0.1/127.0.0.1:11276
2016-07-28 09:30:21,671 [myid:127.0.0.1:11276] - WARN [main-SendThread(127.0.0.1:11276):ClientCnxn$SendThread@1235] - Session 0x322add08d0d0000 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
    at sun.nio.ch.Net.connect0(Native Method)
    at sun.nio.ch.Net.connect(Net.java:465)
    at sun.nio.ch.Net.connect(Net.java:457)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:670)
    at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:275)
    at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:285)
    at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1098)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1133)
2016-07-28 09:30:21,991 [myid:127.0.0.1:11306] - INFO [main-SendThread(127.0.0.1:11306):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:11306. Will not attempt to authenticate using SASL (unknown error)
2016-07-28 09:30:21,991 [myid:127.0.0.1:11306] - ERROR [main-SendThread(127.0.0.1:11306):ClientCnxnSocketNIO@287] - Unable to open socket to 127.0.0.1/127.0.0.1:11306
2016-07-28 09:30:21,991 [myid:127.0.0.1:11306] - WARN [main-SendThread(127.0.0.1:11306):ClientCnxn$SendThread@1235] - Session 0x222add109d00000 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
    at sun.nio.ch.Net.connect0(Native Method)
    at sun.nio.ch.Net.connect(Net.java:465)
    at sun.nio.ch.Net.connect(Net.java:457)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:670)
    at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:275)
    at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:285)
    at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1098)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1133)
2016-07-28 09:30:22,091 [myid:127.0.0.1:11263] - INFO [main-SendThread(127.0.0.1:11263):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:11263. Will not attempt to authenticate using SASL (unknown error)
2016-07-28 09:30:22,091 [myid:127.0.0.1:11263] - ERROR [main-SendThread(127.0.0.1:11263):ClientCnxnSocketNIO@287] - Unable to open socket to 127.0.0.1/127.0.0.1:11263
2016-07-28 09:30:22,091 [myid:127.0.0.1:11263] - WARN [main-SendThread(127.0.0.1:11263):ClientCnxn$SendThread@1235] - Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
    at sun.nio.ch.Net.connect0(Native Method)
    at sun.nio.ch.Net.connect(Net.java:465)
    at sun.nio.ch.Net.connect(Net.java:457)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:670)
    at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:275)
    at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:285)
    at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1098)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1133)
2016-07-28 09:30:22,111 [myid:127.0.0.1:11303] - INFO [main-SendThread(127.0.0.1:11303):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:11303. Will not attempt to authenticate using SASL (unknown error)
2016-07-28 09:30:22,111 [myid:127.0.0.1:11303] - ERROR [main-SendThread(127.0.0.1:11303):ClientCnxnSocketNIO@287] - Unable to open socket to 127.0.0.1/127.0.0.1:11303
2016-07-28 09:30:22,111 [myid:127.0.0.1:11303] - WARN [main-SendThread(127.0.0.1:11303):ClientCnxn$SendThread@1235] - Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
    at sun.nio.ch.Net.connect0(Native Method)
    at sun.nio.ch.Net.connect(Net.java:465)
    at sun.nio.ch.Net.connect(Net.java:457)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:670)
    at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:275)
    at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:285)
    at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1098)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1133)
2016-07-28 09:30:22,241 [myid:127.0.0.1:11270] - INFO [main-SendThread(127.0.0.1:11270):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:11270. Will not attempt to authenticate using SASL (unknown error)
2016-07-28 09:30:22,241 [myid:127.0.0.1:11270] - ERROR [main-SendThread(127.0.0.1:11270):ClientCnxnSocketNIO@287] - Unable to open socket to 127.0.0.1/127.0.0.1:11270
2016-07-28 09:30:22,241 [myid:127.0.0.1:11270] - WARN [main-SendThread(127.0.0.1:11270):ClientCnxn$SendThread@1235] - Session 0x122add08ccf0000 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
    at sun.nio.ch.Net.connect0(Native Method)
    at sun.nio.ch.Net.connect(Net.java:465)
    at sun.nio.ch.Net.connect(Net.java:457)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:670)
    at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:275)
    at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:285)
    at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1098)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1133)
2016-07-28 09:30:22,341 [myid:] - INFO [QuorumPeer[myid=3](plain=/127.0.0.1:11377)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id3,name1=replica.3,name2=Follower]
2016-07-28 09:30:22,341 [myid:] - INFO [QuorumPeer[myid=3](plain=/127.0.0.1:11377)(secure=disabled):Follower@198] - shutdown called
java.lang.Exception: shutdown Follower
    at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:198)
    at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:1115)
2016-07-28 09:30:22,341 [myid:] - WARN [QuorumPeer[myid=3](plain=/127.0.0.1:11377)(secure=disabled):QuorumPeer@1158] - PeerState set to LOOKING
2016-07-28 09:30:22,341 [myid:] - WARN [QuorumPeer[myid=3](plain=/127.0.0.1:11377)(secure=disabled):QuorumPeer@1140] - QuorumPeer main thread exited
2016-07-28 09:30:22,341 [myid:] - INFO [QuorumPeer[myid=3](plain=/127.0.0.1:11377)(secure=disabled):MBeanRegistry@128] - Unregister MBean
[org.apache.ZooKeeperService:name0=ReplicatedServer_id3] 2016-07-28 09:30:22,342 [myid:] - INFO [QuorumPeer[myid=3](plain=/127.0.0.1:11377)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id3,name1=replica.3] 2016-07-28 09:30:22,342 [myid:] - INFO [QuorumPeer[myid=3](plain=/127.0.0.1:11377)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id3,name1=replica.1] 2016-07-28 09:30:22,342 [myid:] - INFO [QuorumPeer[myid=3](plain=/127.0.0.1:11377)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id3,name1=replica.2] 2016-07-28 09:30:22,342 [myid:] - INFO [QuorumPeer[myid=3](plain=/127.0.0.1:11377)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id3,name1=replica.5] 2016-07-28 09:30:22,342 [myid:] - INFO [main:QuorumUtil@250] - Shutting down quorum peer QuorumPeer[myid=4](plain=/127.0.0.1:11380)(secure=disabled) 2016-07-28 09:30:22,342 [myid:] - INFO [main:Follower@198] - shutdown called java.lang.Exception: shutdown Follower at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:198) at org.apache.zookeeper.server.quorum.QuorumPeer.shutdown(QuorumPeer.java:1184) at org.apache.zookeeper.test.QuorumUtil.shutdown(QuorumUtil.java:251) at org.apache.zookeeper.test.QuorumUtil.shutdownAll(QuorumUtil.java:238) at org.apache.zookeeper.test.QuorumUtil.tearDown(QuorumUtil.java:306) at org.apache.zookeeper.test.ReconfigTest.tearDown(ReconfigTest.java:64) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:53) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:38) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906) 2016-07-28 09:30:22,342 [myid:] - INFO [main:LearnerZooKeeperServer@165] - Shutting down 2016-07-28 09:30:22,342 [myid:] - INFO [main:ZooKeeperServer@498] - shutting down 2016-07-28 09:30:22,343 [myid:] - INFO [main:FollowerRequestProcessor@138] - Shutting down 2016-07-28 09:30:22,343 [myid:] - INFO [main:CommitProcessor@414] - Shutting down 2016-07-28 09:30:22,343 [myid:] - INFO [FollowerRequestProcessor:4:FollowerRequestProcessor@109] - FollowerRequestProcessor exited loop! 2016-07-28 09:30:22,343 [myid:] - INFO [CommitProcessor:4:CommitProcessor@299] - CommitProcessor exited loop! 
2016-07-28 09:30:22,343 [myid:] - INFO [main:FinalRequestProcessor@479] - shutdown of request processor complete 2016-07-28 09:30:22,344 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id4,name1=replica.4,name2=Follower,name3=InMemoryDataTree] 2016-07-28 09:30:22,344 [myid:] - INFO [main:SyncRequestProcessor@191] - Shutting down 2016-07-28 09:30:22,344 [myid:] - INFO [SyncThread:4:SyncRequestProcessor@169] - SyncRequestProcessor exited! 2016-07-28 09:30:22,344 [myid:] - INFO [ConnnectionExpirer:NIOServerCnxnFactory$ConnectionExpirerThread@583] - ConnnectionExpirerThread interrupted 2016-07-28 09:30:22,345 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:/127.0.0.1:11380:NIOServerCnxnFactory$AcceptThread@219] - accept thread exitted run method 2016-07-28 09:30:22,345 [myid:] - INFO [NIOServerCxnFactory.SelectorThread-0:NIOServerCnxnFactory$SelectorThread@420] - selector thread exitted run method 2016-07-28 09:30:22,345 [myid:] - INFO [NIOServerCxnFactory.SelectorThread-1:NIOServerCnxnFactory$SelectorThread@420] - selector thread exitted run method 2016-07-28 09:30:22,346 [myid:] - INFO [/127.0.0.1:11382:QuorumCnxManager$Listener@661] - Leaving listener 2016-07-28 09:30:22,346 [myid:] - INFO [main:QuorumUtil@254] - Shutting down leader election QuorumPeer[myid=4](plain=/127.0.0.1:11380)(secure=disabled) 2016-07-28 09:30:22,346 [myid:] - INFO [main:QuorumUtil@259] - Waiting for QuorumPeer[myid=4](plain=/127.0.0.1:11380)(secure=disabled) to exit thread 2016-07-28 09:30:22,521 [myid:127.0.0.1:11309] - INFO [main-SendThread(127.0.0.1:11309):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:11309. 
Will not attempt to authenticate using SASL (unknown error) 2016-07-28 09:30:22,521 [myid:127.0.0.1:11309] - ERROR [main-SendThread(127.0.0.1:11309):ClientCnxnSocketNIO@287] - Unable to open socket to 127.0.0.1/127.0.0.1:11309 2016-07-28 09:30:22,521 [myid:127.0.0.1:11309] - WARN [main-SendThread(127.0.0.1:11309):ClientCnxn$SendThread@1235] - Session 0x322add109d50000 for server null, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.Net.connect0(Native Method) at sun.nio.ch.Net.connect(Net.java:465) at sun.nio.ch.Net.connect(Net.java:457) at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:670) at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:275) at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:285) at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1098) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1133) 2016-07-28 09:30:22,631 [myid:127.0.0.1:11273] - INFO [main-SendThread(127.0.0.1:11273):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:11273. 
Will not attempt to authenticate using SASL (unknown error) 2016-07-28 09:30:22,631 [myid:127.0.0.1:11273] - ERROR [main-SendThread(127.0.0.1:11273):ClientCnxnSocketNIO@287] - Unable to open socket to 127.0.0.1/127.0.0.1:11273 2016-07-28 09:30:22,631 [myid:127.0.0.1:11273] - WARN [main-SendThread(127.0.0.1:11273):ClientCnxn$SendThread@1235] - Session 0x222add090c60000 for server null, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.Net.connect0(Native Method) at sun.nio.ch.Net.connect(Net.java:465) at sun.nio.ch.Net.connect(Net.java:457) at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:670) at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:275) at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:285) at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1098) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1133) 2016-07-28 09:30:22,761 [myid:] - INFO [WorkerReceiver[myid=4]:FastLeaderElection$Messenger$WorkerReceiver@440] - WorkerReceiver is down 2016-07-28 09:30:22,761 [myid:] - INFO [WorkerSender[myid=4]:FastLeaderElection$Messenger$WorkerSender@470] - WorkerSender is down 2016-07-28 09:30:22,771 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection$Messenger$WorkerReceiver@440] - WorkerReceiver is down 2016-07-28 09:30:22,771 [myid:] - INFO [WorkerSender[myid=1]:FastLeaderElection$Messenger$WorkerSender@470] - WorkerSender is down 2016-07-28 09:30:22,771 [myid:] - INFO [WorkerSender[myid=2]:FastLeaderElection$Messenger$WorkerSender@470] - WorkerSender is down 2016-07-28 09:30:22,771 [myid:] - INFO [WorkerReceiver[myid=3]:FastLeaderElection$Messenger$WorkerReceiver@440] - WorkerReceiver is down 2016-07-28 09:30:22,771 [myid:] - INFO [WorkerReceiver[myid=2]:FastLeaderElection$Messenger$WorkerReceiver@440] - WorkerReceiver is down 2016-07-28 09:30:22,771 [myid:] - 
INFO [WorkerSender[myid=3]:FastLeaderElection$Messenger$WorkerSender@470] - WorkerSender is down 2016-07-28 09:30:23,061 [myid:127.0.0.1:11260] - INFO [main-SendThread(127.0.0.1:11260):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:11260. Will not attempt to authenticate using SASL (unknown error) 2016-07-28 09:30:23,061 [myid:127.0.0.1:11260] - ERROR [main-SendThread(127.0.0.1:11260):ClientCnxnSocketNIO@287] - Unable to open socket to 127.0.0.1/127.0.0.1:11260 2016-07-28 09:30:23,061 [myid:127.0.0.1:11260] - WARN [main-SendThread(127.0.0.1:11260):ClientCnxn$SendThread@1235] - Session 0x222adcf0da20000 for server null, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.Net.connect0(Native Method) at sun.nio.ch.Net.connect(Net.java:465) at sun.nio.ch.Net.connect(Net.java:457) at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:670) at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:275) at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:285) at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1098) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1133) 2016-07-28 09:30:23,211 [myid:127.0.0.1:11263] - INFO [main-SendThread(127.0.0.1:11263):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:11263. 
Will not attempt to authenticate using SASL (unknown error) 2016-07-28 09:30:23,211 [myid:127.0.0.1:11263] - ERROR [main-SendThread(127.0.0.1:11263):ClientCnxnSocketNIO@287] - Unable to open socket to 127.0.0.1/127.0.0.1:11263 2016-07-28 09:30:23,211 [myid:127.0.0.1:11263] - WARN [main-SendThread(127.0.0.1:11263):ClientCnxn$SendThread@1235] - Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.Net.connect0(Native Method) at sun.nio.ch.Net.connect(Net.java:465) at sun.nio.ch.Net.connect(Net.java:457) at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:670) at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:275) at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:285) at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1098) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1133) 2016-07-28 09:30:23,231 [myid:127.0.0.1:11303] - INFO [main-SendThread(127.0.0.1:11303):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:11303. Will not attempt to authenticate using SASL (unknown error) 2016-07-28 09:30:23,231 [myid:127.0.0.1:11276] - INFO [main-SendThread(127.0.0.1:11276):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:11276. 
Will not attempt to authenticate using SASL (unknown error) 2016-07-28 09:30:23,231 [myid:127.0.0.1:11303] - ERROR [main-SendThread(127.0.0.1:11303):ClientCnxnSocketNIO@287] - Unable to open socket to 127.0.0.1/127.0.0.1:11303 2016-07-28 09:30:23,231 [myid:127.0.0.1:11303] - WARN [main-SendThread(127.0.0.1:11303):ClientCnxn$SendThread@1235] - Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.Net.connect0(Native Method) at sun.nio.ch.Net.connect(Net.java:465) at sun.nio.ch.Net.connect(Net.java:457) at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:670) at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:275) at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:285) at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1098) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1133) 2016-07-28 09:30:23,231 [myid:127.0.0.1:11276] - ERROR [main-SendThread(127.0.0.1:11276):ClientCnxnSocketNIO@287] - Unable to open socket to 127.0.0.1/127.0.0.1:11276 2016-07-28 09:30:23,232 [myid:127.0.0.1:11276] - WARN [main-SendThread(127.0.0.1:11276):ClientCnxn$SendThread@1235] - Session 0x322add08d0d0000 for server null, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.Net.connect0(Native Method) at sun.nio.ch.Net.connect(Net.java:465) at sun.nio.ch.Net.connect(Net.java:457) at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:670) at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:275) at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:285) at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1098) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1133) 2016-07-28 09:30:23,341 
[myid:127.0.0.1:11257] - INFO [main-SendThread(127.0.0.1:11257):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:11257. Will not attempt to authenticate using SASL (unknown error) 2016-07-28 09:30:23,341 [myid:127.0.0.1:11257] - ERROR [main-SendThread(127.0.0.1:11257):ClientCnxnSocketNIO@287] - Unable to open socket to 127.0.0.1/127.0.0.1:11257 2016-07-28 09:30:23,341 [myid:127.0.0.1:11257] - WARN [main-SendThread(127.0.0.1:11257):ClientCnxn$SendThread@1235] - Session 0x122adcf0ae70000 for server null, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.Net.connect0(Native Method) at sun.nio.ch.Net.connect(Net.java:465) at sun.nio.ch.Net.connect(Net.java:457) at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:670) at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:275) at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:285) at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1098) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1133) 2016-07-28 09:30:23,351 [myid:] - INFO [QuorumPeer[myid=4](plain=/127.0.0.1:11380)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id4,name1=replica.4,name2=Follower] 2016-07-28 09:30:23,351 [myid:] - INFO [QuorumPeer[myid=4](plain=/127.0.0.1:11380)(secure=disabled):Follower@198] - shutdown called java.lang.Exception: shutdown Follower at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:198) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:1115) 2016-07-28 09:30:23,351 [myid:] - WARN [QuorumPeer[myid=4](plain=/127.0.0.1:11380)(secure=disabled):QuorumPeer@1158] - PeerState set to LOOKING 2016-07-28 09:30:23,351 [myid:] - WARN [QuorumPeer[myid=4](plain=/127.0.0.1:11380)(secure=disabled):QuorumPeer@1140] - QuorumPeer 
main thread exited 2016-07-28 09:30:23,351 [myid:] - INFO [QuorumPeer[myid=4](plain=/127.0.0.1:11380)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id4] 2016-07-28 09:30:23,352 [myid:] - INFO [QuorumPeer[myid=4](plain=/127.0.0.1:11380)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id4,name1=replica.4] 2016-07-28 09:30:23,352 [myid:] - INFO [QuorumPeer[myid=4](plain=/127.0.0.1:11380)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id4,name1=replica.1] 2016-07-28 09:30:23,352 [myid:] - INFO [QuorumPeer[myid=4](plain=/127.0.0.1:11380)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id4,name1=replica.2] 2016-07-28 09:30:23,352 [myid:] - INFO [QuorumPeer[myid=4](plain=/127.0.0.1:11380)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id4,name1=replica.3] 2016-07-28 09:30:23,352 [myid:] - INFO [QuorumPeer[myid=4](plain=/127.0.0.1:11380)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id4,name1=replica.5] 2016-07-28 09:30:23,352 [myid:] - INFO [main:QuorumUtil@250] - Shutting down quorum peer QuorumPeer[myid=5](plain=/127.0.0.1:11383)(secure=disabled) 2016-07-28 09:30:23,352 [myid:] - INFO [main:Leader@617] - Shutting down 2016-07-28 09:30:23,352 [myid:] - INFO [main:Leader@623] - Shutdown called java.lang.Exception: shutdown Leader! 
reason: quorum Peer shutdown at org.apache.zookeeper.server.quorum.Leader.shutdown(Leader.java:623) at org.apache.zookeeper.server.quorum.QuorumPeer.shutdown(QuorumPeer.java:1181) at org.apache.zookeeper.test.QuorumUtil.shutdown(QuorumUtil.java:251) at org.apache.zookeeper.test.QuorumUtil.shutdownAll(QuorumUtil.java:238) at org.apache.zookeeper.test.QuorumUtil.tearDown(QuorumUtil.java:306) at org.apache.zookeeper.test.ReconfigTest.tearDown(ReconfigTest.java:64) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:53) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:38) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518) at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906) 2016-07-28 09:30:23,353 [myid:] - INFO [main:ZooKeeperServer@498] - shutting down 2016-07-28 09:30:23,353 [myid:] - INFO [main:SessionTrackerImpl@232] - Shutting down 2016-07-28 09:30:23,353 [myid:] - INFO [main:LeaderRequestProcessor@77] - Shutting down 2016-07-28 09:30:23,353 [myid:] - INFO [main:PrepRequestProcessor@965] - Shutting down 2016-07-28 09:30:23,353 [myid:] - INFO [LearnerCnxAcceptor-/127.0.0.1:11384:Leader$LearnerCnxAcceptor@373] - exception while shutting down acceptor: java.net.SocketException: Socket closed 2016-07-28 09:30:23,353 [myid:] - INFO [ProcessThread(sid:5 cport:-1)::PrepRequestProcessor@154] - PrepRequestProcessor exited loop! 2016-07-28 09:30:23,353 [myid:] - INFO [main:ProposalRequestProcessor@88] - Shutting down 2016-07-28 09:30:23,354 [myid:] - INFO [main:CommitProcessor@414] - Shutting down 2016-07-28 09:30:23,354 [myid:] - INFO [CommitProcessor:5:CommitProcessor@299] - CommitProcessor exited loop! 2016-07-28 09:30:23,354 [myid:] - INFO [main:Leader$ToBeAppliedRequestProcessor@918] - Shutting down 2016-07-28 09:30:23,354 [myid:] - INFO [main:FinalRequestProcessor@479] - shutdown of request processor complete 2016-07-28 09:30:23,354 [myid:] - INFO [main:SyncRequestProcessor@191] - Shutting down 2016-07-28 09:30:23,354 [myid:] - INFO [SyncThread:5:SyncRequestProcessor@169] - SyncRequestProcessor exited! 
2016-07-28 09:30:23,355 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id5,name1=replica.5,name2=Leader,name3=InMemoryDataTree] 2016-07-28 09:30:23,355 [myid:] - WARN [LearnerHandler-/127.0.0.1:51992:LearnerHandler@619] - ******* GOODBYE /127.0.0.1:51992 ******** 2016-07-28 09:30:23,355 [myid:] - WARN [LearnerHandler-/127.0.0.1:51977:LearnerHandler@619] - ******* GOODBYE /127.0.0.1:51977 ******** 2016-07-28 09:30:23,355 [myid:] - WARN [LearnerHandler-/127.0.0.1:51992:LearnerHandler@903] - Ignoring unexpected exception java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1219) at java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:340) at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:338) at org.apache.zookeeper.server.quorum.LearnerHandler.shutdown(LearnerHandler.java:901) at org.apache.zookeeper.server.quorum.LearnerHandler.run(LearnerHandler.java:622) 2016-07-28 09:30:23,355 [myid:] - WARN [LearnerHandler-/127.0.0.1:51976:LearnerHandler@619] - ******* GOODBYE /127.0.0.1:51976 ******** 2016-07-28 09:30:23,355 [myid:] - WARN [LearnerHandler-/127.0.0.1:51975:LearnerHandler@619] - ******* GOODBYE /127.0.0.1:51975 ******** 2016-07-28 09:30:23,356 [myid:] - INFO [ConnnectionExpirer:NIOServerCnxnFactory$ConnectionExpirerThread@583] - ConnnectionExpirerThread interrupted 2016-07-28 09:30:23,356 [myid:] - INFO [NIOServerCxnFactory.SelectorThread-0:NIOServerCnxnFactory$SelectorThread@420] - selector thread exitted run method 2016-07-28 09:30:23,355 [myid:] - WARN [LearnerHandler-/127.0.0.1:51977:LearnerHandler@903] - Ignoring unexpected exception java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1219) at java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:340) 
at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:338) at org.apache.zookeeper.server.quorum.LearnerHandler.shutdown(LearnerHandler.java:901) at org.apache.zookeeper.server.quorum.LearnerHandler.run(LearnerHandler.java:622) 2016-07-28 09:30:23,356 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:/127.0.0.1:11383:NIOServerCnxnFactory$AcceptThread@219] - accept thread exitted run method 2016-07-28 09:30:23,356 [myid:] - INFO [NIOServerCxnFactory.SelectorThread-1:NIOServerCnxnFactory$SelectorThread@420] - selector thread exitted run method 2016-07-28 09:30:23,356 [myid:] - WARN [LearnerHandler-/127.0.0.1:51975:LearnerHandler@903] - Ignoring unexpected exception java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1219) at java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:340) at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:338) at org.apache.zookeeper.server.quorum.LearnerHandler.shutdown(LearnerHandler.java:901) at org.apache.zookeeper.server.quorum.LearnerHandler.run(LearnerHandler.java:622) 2016-07-28 09:30:23,356 [myid:] - WARN [LearnerHandler-/127.0.0.1:51976:LearnerHandler@903] - Ignoring unexpected exception java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1219) at java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:340) at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:338) at org.apache.zookeeper.server.quorum.LearnerHandler.shutdown(LearnerHandler.java:901) at org.apache.zookeeper.server.quorum.LearnerHandler.run(LearnerHandler.java:622) 2016-07-28 09:30:23,357 [myid:] - INFO [/127.0.0.1:11385:QuorumCnxManager$Listener@661] - Leaving listener 2016-07-28 09:30:23,357 [myid:] - INFO [QuorumPeer[myid=5](plain=/127.0.0.1:11383)(secure=disabled):MBeanRegistry@128] 
- Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id5,name1=replica.5,name2=Leader] 2016-07-28 09:30:23,358 [myid:] - INFO [main:QuorumUtil@254] - Shutting down leader election QuorumPeer[myid=5](plain=/127.0.0.1:11383)(secure=disabled) 2016-07-28 09:30:23,358 [myid:] - WARN [QuorumPeer[myid=5](plain=/127.0.0.1:11383)(secure=disabled):QuorumPeer@1127] - Unexpected exception java.lang.InterruptedException at java.lang.Object.wait(Native Method) at org.apache.zookeeper.server.quorum.Leader.lead(Leader.java:561) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:1124) 2016-07-28 09:30:23,358 [myid:] - INFO [QuorumPeer[myid=5](plain=/127.0.0.1:11383)(secure=disabled):Leader@617] - Shutting down 2016-07-28 09:30:23,358 [myid:] - WARN [QuorumPeer[myid=5](plain=/127.0.0.1:11383)(secure=disabled):QuorumPeer@1158] - PeerState set to LOOKING 2016-07-28 09:30:23,358 [myid:] - WARN [QuorumPeer[myid=5](plain=/127.0.0.1:11383)(secure=disabled):QuorumPeer@1140] - QuorumPeer main thread exited 2016-07-28 09:30:23,358 [myid:] - INFO [main:QuorumUtil@259] - Waiting for QuorumPeer[myid=5](plain=/127.0.0.1:11383)(secure=disabled) to exit thread 2016-07-28 09:30:23,358 [myid:] - INFO [QuorumPeer[myid=5](plain=/127.0.0.1:11383)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id5] 2016-07-28 09:30:23,359 [myid:] - INFO [QuorumPeer[myid=5](plain=/127.0.0.1:11383)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id5,name1=replica.5] 2016-07-28 09:30:23,359 [myid:] - INFO [QuorumPeer[myid=5](plain=/127.0.0.1:11383)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id5,name1=replica.1] 2016-07-28 09:30:23,359 [myid:] - INFO [QuorumPeer[myid=5](plain=/127.0.0.1:11383)(secure=disabled):MBeanRegistry@128] - Unregister MBean 
[org.apache.ZooKeeperService:name0=ReplicatedServer_id5,name1=replica.2] 2016-07-28 09:30:23,359 [myid:] - INFO [QuorumPeer[myid=5](plain=/127.0.0.1:11383)(secure=disabled):MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id5,name1=replica.3] 2016-07-28 09:30:23,359 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 11371 2016-07-28 09:30:23,359 [myid:] - INFO [main:QuorumUtil@243] - 127.0.0.1:11371 is no longer accepting client connections 2016-07-28 09:30:23,359 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 11374 2016-07-28 09:30:23,360 [myid:] - INFO [main:QuorumUtil@243] - 127.0.0.1:11374 is no longer accepting client connections 2016-07-28 09:30:23,360 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 11377 2016-07-28 09:30:23,360 [myid:] - INFO [main:QuorumUtil@243] - 127.0.0.1:11377 is no longer accepting client connections 2016-07-28 09:30:23,360 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 11380 2016-07-28 09:30:23,360 [myid:] - INFO [main:QuorumUtil@243] - 127.0.0.1:11380 is no longer accepting client connections 2016-07-28 09:30:23,360 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 11383 2016-07-28 09:30:23,360 [myid:] - INFO [main:QuorumUtil@243] - 127.0.0.1:11383 is no longer accepting client connections 2016-07-28 09:30:23,361 [myid:] - INFO [main:ZKTestCase$1@65] - SUCCEEDED testRemoveOneAsynchronous 2016-07-28 09:30:23,361 [myid:] - INFO [main:ZKTestCase$1@60] - FINISHED testRemoveOneAsynchronous {noformat} |
flaky, flaky-test | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 34 weeks ago | 0|i31mu7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2492 | gethostname return error before Win32WSAStartup on windows. |
Bug | Patch Available | Trivial | Unresolved | spooky000 | spooky000 | spooky000 | 27/Jul/16 22:09 | 05/Feb/20 07:12 | 3.5.2 | 3.7.0, 3.5.8 | 0 | 2 | windows | gethostname returns an error when called before Win32 WSAStartup on Windows. In the log_env function: gethostname(buf, sizeof(buf)); LOG_INFO(LOGCALLBACK(zh), "Client environment:host.name=%s", buf); so buf will be an uninitialized buffer. |
9223372036854775807 | No Perforce job exists for this issue. | 3 | 9223372036854775807 | 1 year, 45 weeks ago | 0|i31lr3: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2491 | C client build error in vs 2015 |
Bug | Resolved | Minor | Duplicate | Andrew Schwartzmeyer | spooky000 | spooky000 | 27/Jul/16 21:59 | 19/Dec/19 18:01 | 27/Jul/17 18:03 | 3.5.2 | 3.5.4 | c client | 0 | 4 | ZOOKEEPER-2841 | windows vs 2015 | Visual Studio 2015 supports snprintf natively, so the compatibility macro #define snprintf _snprintf throws a compile error. |
9223372036854775807 | No Perforce job exists for this issue. | 3 | 9223372036854775807 | 2 years, 34 weeks ago | 0|i31lqv: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2490 | infinitely connect on windows |
Bug | Patch Available | Major | Unresolved | spooky000 | spooky000 | spooky000 | 27/Jul/16 02:17 | 28/Feb/20 16:56 | 3.5.2 | 3.7.0, 3.5.8 | c client | 0 | 3 | 0 | 600 | ZOOKEEPER-3726 | Windows | In the addrvec_contains function, this memcmp always returns false on Windows release builds: for (i = 0; i < avec->count; i++) { if(memcmp(&avec->data[i], addr, INET_ADDRSTRLEN) == 0) return 1; } The cause is that INET_ADDRSTRLEN is defined as 16 on Linux but 22 on Windows, so the comparison covers a different (and too long) byte range there. |
100% | 100% | 600 | 0 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 3 | 9223372036854775807 | 2 years, 36 weeks ago | 0|i31jvz: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2489 | Upgrade Jetty dependency to a recent stable release version. |
Improvement | Closed | Major | Fixed | Edward Ribeiro | Michael Han | Michael Han | 26/Jul/16 16:54 | 17/May/17 23:44 | 24/Aug/16 20:25 | 3.4.8, 3.5.2 | 3.5.3, 3.6.0 | server | 0 | 4 | ZOOKEEPER-2512 | Jetty was added as a dependency in ZOOKEEPER-1346 in 2011 and has not been updated since. The version we are using in trunk is 6.1.26, which was released in 2010. We should consider upgrading Jetty to a recent stable release (probably one of the 9.x releases). Note: this JIRA issue is recreated from https://issues.apache.org/jira/browse/ZOOKEEPER-2427, which was deleted a couple of weeks ago as part of the Apache Infrastructure spam-fighting effort. |
9223372036854775807 | No Perforce job exists for this issue. | 3 | 9223372036854775807 | 3 years, 30 weeks ago |
Reviewed
|
0|i31ja7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2488 | Unsynchronized access to shuttingDownLE in QuorumPeer |
Bug | Open | Major | Unresolved | gaoshu | Michael Han | Michael Han | 25/Jul/16 19:05 | 05/Feb/20 07:16 | 3.5.2 | 3.7.0, 3.5.8 | server | 0 | 5 | 0 | 2400 | Access to shuttingDownLE in QuorumPeer is not synchronized here: https://github.com/apache/zookeeper/blob/3c37184e83a3e68b73544cebccf9388eea26f523/src/java/main/org/apache/zookeeper/server/quorum/QuorumPeer.java#L1066 https://github.com/apache/zookeeper/blob/3c37184e83a3e68b73544cebccf9388eea26f523/src/java/main/org/ The access should be synchronized, as the same variable may be accessed in QuorumPeer::restartLeaderElection, which is synchronized. |
100% | 100% | 2400 | 0 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 1 year, 16 weeks, 3 days ago | 0|i31hgv: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2487 | Test failed because of 'Forked Java VM exited abnormally.' |
Test | Open | Major | Unresolved | Michael Han | Michael Han | Michael Han | 25/Jul/16 13:06 | 20/Nov/19 05:41 | 3.4.8, 3.4.11 | tests | 0 | 2 | ZOOKEEPER-2135 | Sometimes tests fail with an error message like this: Error Message: Forked Java VM exited abnormally. Please note the time in the report does not reflect the time until the VM exit. Stack Trace: junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please note the time in the report does not reflect the time until the VM exit. Examples: https://builds.apache.org/job/ZooKeeper-trunk-solaris/1239/ https://builds.apache.org/job/ZooKeeper_branch34_openjdk7/1147/ https://builds.apache.org/job/ZooKeeper_branch34_openjdk7/1129/ The failure happens on all platforms (jdk7/8/solaris) of branch 3.4; branch 3.5 looks OK in general. |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 17 weeks, 1 day ago | 0|i31gyv: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2486 | ZOOKEEPER-3170 Flaky Test: org.apache.zookeeper.test.QuorumZxidSyncTest.testBehindLeader |
Sub-task | Resolved | Major | Cannot Reproduce | Andor Molnar | Michael Han | Michael Han | 21/Jul/16 18:44 | 25/Oct/18 10:55 | 25/Oct/18 10:55 | 3.4.8 | 3.5.2 | tests | 0 | 2 | ZOOKEEPER-2135 | From https://builds.apache.org/job/ZooKeeper_branch34_jdk7/1156 {noformat} Error Message waiting for server up Stacktrace junit.framework.AssertionFailedError: waiting for server up at org.apache.zookeeper.test.QuorumBase.startServers(QuorumBase.java:183) at org.apache.zookeeper.test.QuorumBase.startServers(QuorumBase.java:113) at org.apache.zookeeper.test.QuorumZxidSyncTest.testBehindLeader(QuorumZxidSyncTest.java:67) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:55) Standard Output 2016-07-21 08:11:45,722 [myid:] - INFO [main:PortAssignment@32] - assigning port 11221 2016-07-21 08:11:45,729 [myid:] - INFO [main:ZKTestCase$1@50] - STARTING testBehindLeader 2016-07-21 08:11:45,729 [myid:] - INFO [main:QuorumBase@69] - QuorumBase.setup null 2016-07-21 08:11:45,926 [myid:] - INFO [main:PortAssignment@32] - assigning port 11222 2016-07-21 08:11:45,926 [myid:] - INFO [main:PortAssignment@32] - assigning port 11223 2016-07-21 08:11:45,927 [myid:] - INFO [main:PortAssignment@32] - assigning port 11224 2016-07-21 08:11:45,927 [myid:] - INFO [main:PortAssignment@32] - assigning port 11225 2016-07-21 08:11:45,927 [myid:] - INFO [main:PortAssignment@32] - assigning port 11226 2016-07-21 08:11:45,928 [myid:] - INFO [main:PortAssignment@32] - assigning port 11227 2016-07-21 08:11:45,928 [myid:] - INFO [main:PortAssignment@32] - assigning port 11228 2016-07-21 08:11:45,928 [myid:] - INFO [main:PortAssignment@32] - assigning port 11229 2016-07-21 08:11:45,928 [myid:] - INFO [main:PortAssignment@32] - assigning port 11230 2016-07-21 08:11:45,929 [myid:] - INFO [main:PortAssignment@32] - assigning port 11231 2016-07-21 08:11:45,929 [myid:] - INFO [main:QuorumBase@93] - Ports are: 
127.0.0.1:11222,127.0.0.1:11223,127.0.0.1:11224,127.0.0.1:11225,127.0.0.1:11226 2016-07-21 08:11:45,946 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.1 to address: /127.0.0.1 2016-07-21 08:11:45,946 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.1 to address: /127.0.0.1 2016-07-21 08:11:45,948 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.1 to address: /127.0.0.1 2016-07-21 08:11:45,948 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.1 to address: /127.0.0.1 2016-07-21 08:11:45,949 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.1 to address: /127.0.0.1 2016-07-21 08:11:45,949 [myid:] - INFO [main:QuorumBase@142] - creating QuorumPeer 1 port 11222 2016-07-21 08:11:45,961 [myid:] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:11222 2016-07-21 08:11:45,979 [myid:] - INFO [main:QuorumBase@145] - creating QuorumPeer 2 port 11223 2016-07-21 08:11:45,979 [myid:] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:11223 2016-07-21 08:11:45,980 [myid:] - INFO [main:QuorumBase@148] - creating QuorumPeer 3 port 11224 2016-07-21 08:11:45,980 [myid:] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:11224 2016-07-21 08:11:45,981 [myid:] - INFO [main:QuorumBase@151] - creating QuorumPeer 4 port 11225 2016-07-21 08:11:45,981 [myid:] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:11225 2016-07-21 08:11:45,981 [myid:] - INFO [main:QuorumBase@154] - creating QuorumPeer 5 port 11226 2016-07-21 08:11:45,982 [myid:] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:11226 2016-07-21 08:11:45,982 [myid:] - INFO [main:QuorumBase@163] - QuorumPeer 1 voting view: {1=org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@47628981, 2=org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@163198c4, 
3=org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@1224773e, 4=org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@181090c0, 5=org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@678f3997} 2016-07-21 08:11:45,983 [myid:] - INFO [main:QuorumBase@164] - QuorumPeer 2 voting view: {1=org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@47628981, 2=org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@163198c4, 3=org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@1224773e, 4=org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@181090c0, 5=org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@678f3997} 2016-07-21 08:11:45,983 [myid:] - INFO [main:QuorumBase@165] - QuorumPeer 3 voting view: {1=org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@47628981, 2=org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@163198c4, 3=org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@1224773e, 4=org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@181090c0, 5=org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@678f3997} 2016-07-21 08:11:45,983 [myid:] - INFO [main:QuorumBase@166] - QuorumPeer 4 voting view: {1=org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@47628981, 2=org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@163198c4, 3=org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@1224773e, 4=org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@181090c0, 5=org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@678f3997} 2016-07-21 08:11:45,983 [myid:] - INFO [main:QuorumBase@167] - QuorumPeer 5 voting view: {1=org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@47628981, 2=org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@163198c4, 3=org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@1224773e, 4=org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@181090c0, 5=org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@678f3997} 
2016-07-21 08:11:45,983 [myid:] - INFO [main:QuorumBase@169] - start QuorumPeer 1 2016-07-21 08:11:45,988 [myid:] - INFO [main:QuorumPeer@533] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 08:11:45,990 [myid:] - INFO [main:QuorumPeer@548] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 08:11:45,996 [myid:] - INFO [ListenerThread:QuorumCnxManager$Listener@534] - My election bind port: /127.0.0.1:12227 2016-07-21 08:11:46,001 [myid:] - INFO [main:QuorumBase@171] - start QuorumPeer 2 2016-07-21 08:11:46,002 [myid:] - INFO [main:QuorumPeer@533] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 08:11:46,003 [myid:] - INFO [main:QuorumPeer@548] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 08:11:46,004 [myid:] - INFO [ListenerThread:QuorumCnxManager$Listener@534] - My election bind port: /127.0.0.1:12228 2016-07-21 08:11:46,006 [myid:] - INFO [main:QuorumBase@173] - start QuorumPeer 3 2016-07-21 08:11:46,006 [myid:] - INFO [main:QuorumPeer@533] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 08:11:46,007 [myid:] - INFO [main:QuorumPeer@548] - acceptedEpoch not found! Creating with a reasonable default of 0. 
This should only happen when you are upgrading your installation 2016-07-21 08:11:46,007 [myid:] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11222:QuorumPeer@774] - LOOKING 2016-07-21 08:11:46,008 [myid:] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11223:QuorumPeer@774] - LOOKING 2016-07-21 08:11:46,009 [myid:] - INFO [ListenerThread:QuorumCnxManager$Listener@534] - My election bind port: /127.0.0.1:12229 2016-07-21 08:11:46,009 [myid:] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11222:FastLeaderElection@818] - New election. My id = 1, proposed zxid=0x0 2016-07-21 08:11:46,009 [myid:] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11223:FastLeaderElection@818] - New election. My id = 2, proposed zxid=0x0 2016-07-21 08:11:46,009 [myid:] - INFO [main:QuorumBase@175] - start QuorumPeer 4 2016-07-21 08:11:46,010 [myid:] - INFO [main:QuorumPeer@533] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 08:11:46,011 [myid:] - INFO [main:QuorumPeer@548] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 08:11:46,011 [myid:] - INFO [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:11224:QuorumPeer@774] - LOOKING 2016-07-21 08:11:46,011 [myid:] - INFO [/127.0.0.1:12228:QuorumCnxManager$Listener@541] - Received connection request /127.0.0.1:38924 2016-07-21 08:11:46,011 [myid:] - INFO [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:11224:FastLeaderElection@818] - New election. 
My id = 3, proposed zxid=0x0 2016-07-21 08:11:46,011 [myid:] - INFO [/127.0.0.1:12227:QuorumCnxManager$Listener@541] - Received connection request /127.0.0.1:43415 2016-07-21 08:11:46,011 [myid:] - INFO [WorkerSender[myid=1]:QuorumCnxManager@199] - Have smaller server identifier, so dropping the connection: (2, 1) 2016-07-21 08:11:46,011 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@600] - Notification: 1 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,013 [myid:] - INFO [WorkerSender[myid=1]:QuorumCnxManager@199] - Have smaller server identifier, so dropping the connection: (3, 1) 2016-07-21 08:11:46,014 [myid:] - INFO [/127.0.0.1:12229:QuorumCnxManager$Listener@541] - Received connection request /127.0.0.1:51548 2016-07-21 08:11:46,014 [myid:] - INFO [main:QuorumBase@177] - start QuorumPeer 5 2016-07-21 08:11:46,015 [myid:] - INFO [ListenerThread:QuorumCnxManager$Listener@534] - My election bind port: /127.0.0.1:12230 2016-07-21 08:11:46,015 [myid:] - INFO [main:QuorumPeer@533] - currentEpoch not found! Creating with a reasonable default of 0. 
This should only happen when you are upgrading your installation 2016-07-21 08:11:46,015 [myid:] - WARN [WorkerSender[myid=1]:QuorumCnxManager@400] - Cannot open channel to 4 at election address /127.0.0.1:12230 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:381) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:354) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:452) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:433) at java.lang.Thread.run(Thread.java:745) 2016-07-21 08:11:46,018 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@600] - Notification: 1 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,018 [myid:] - INFO [/127.0.0.1:12229:QuorumCnxManager$Listener@541] - Received connection request /127.0.0.1:51551 2016-07-21 08:11:46,016 [myid:] - INFO [/127.0.0.1:12227:QuorumCnxManager$Listener@541] - Received connection request /127.0.0.1:43417 2016-07-21 08:11:46,016 [myid:] - INFO [QuorumPeer[myid=4]/0:0:0:0:0:0:0:0:11225:QuorumPeer@774] - LOOKING 2016-07-21 08:11:46,016 [myid:] - INFO [main:QuorumPeer@548] - acceptedEpoch not found! Creating with a reasonable default of 0. 
This should only happen when you are upgrading your installation 2016-07-21 08:11:46,019 [myid:] - INFO [QuorumPeer[myid=4]/0:0:0:0:0:0:0:0:11225:FastLeaderElection@818] - New election. My id = 4, proposed zxid=0x0 2016-07-21 08:11:46,019 [myid:] - INFO [/127.0.0.1:12230:QuorumCnxManager$Listener@541] - Received connection request /127.0.0.1:46102 2016-07-21 08:11:46,018 [myid:] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@600] - Notification: 1 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,018 [myid:] - INFO [WorkerSender[myid=2]:QuorumCnxManager@199] - Have smaller server identifier, so dropping the connection: (3, 2) 2016-07-21 08:11:46,018 [myid:] - INFO [WorkerSender[myid=1]:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.1 to address: /127.0.0.1 2016-07-21 08:11:46,018 [myid:] - INFO [/127.0.0.1:12228:QuorumCnxManager$Listener@541] - Received connection request /127.0.0.1:38928 2016-07-21 08:11:46,021 [myid:] - INFO [WorkerSender[myid=2]:QuorumCnxManager@199] - Have smaller server identifier, so dropping the connection: (4, 2) 2016-07-21 08:11:46,021 [myid:] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@600] - Notification: 1 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,021 [myid:] - INFO [/127.0.0.1:12229:QuorumCnxManager$Listener@541] - Received connection request /127.0.0.1:51553 2016-07-21 08:11:46,020 [myid:] - INFO [WorkerReceiver[myid=3]:FastLeaderElection@600] - Notification: 1 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,019 [myid:] - INFO [WorkerSender[myid=3]:QuorumCnxManager@199] - Have smaller server identifier, so dropping the connection: (4, 3) 2016-07-21 08:11:46,022 [myid:] - INFO 
[WorkerReceiver[myid=3]:FastLeaderElection@600] - Notification: 1 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,021 [myid:] - WARN [WorkerSender[myid=2]:QuorumCnxManager@400] - Cannot open channel to 5 at election address /127.0.0.1:12231 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:381) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:354) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:452) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:433) at java.lang.Thread.run(Thread.java:745) 2016-07-21 08:11:46,021 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@600] - Notification: 1 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,021 [myid:] - WARN [WorkerSender[myid=1]:QuorumCnxManager@400] - Cannot open channel to 5 at election address /127.0.0.1:12231 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at 
java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:381) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:354) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:452) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:433) at java.lang.Thread.run(Thread.java:745) 2016-07-21 08:11:46,023 [myid:] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@600] - Notification: 1 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,023 [myid:] - INFO [WorkerReceiver[myid=3]:FastLeaderElection@600] - Notification: 1 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,023 [myid:] - INFO [WorkerSender[myid=2]:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.1 to address: /127.0.0.1 2016-07-21 08:11:46,024 [myid:] - INFO [/127.0.0.1:12230:QuorumCnxManager$Listener@541] - Received connection request /127.0.0.1:46104 2016-07-21 08:11:46,022 [myid:] - INFO [ListenerThread:QuorumCnxManager$Listener@534] - My election bind port: /127.0.0.1:12231 2016-07-21 08:11:46,022 [myid:] - WARN [WorkerSender[myid=3]:QuorumCnxManager@400] - Cannot open channel to 5 at election address /127.0.0.1:12231 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at 
java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:381) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:354) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:452) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:433) at java.lang.Thread.run(Thread.java:745) 2016-07-21 08:11:46,025 [myid:] - INFO [WorkerReceiver[myid=3]:FastLeaderElection@600] - Notification: 1 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,025 [myid:] - INFO [WorkerReceiver[myid=4]:FastLeaderElection@600] - Notification: 1 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,025 [myid:] - INFO [main:QuorumBase@179] - started QuorumPeer 5 2016-07-21 08:11:46,026 [myid:] - INFO [main:QuorumBase@181] - Checking ports 127.0.0.1:11222,127.0.0.1:11223,127.0.0.1:11224,127.0.0.1:11225,127.0.0.1:11226 2016-07-21 08:11:46,025 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@600] - Notification: 1 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,025 [myid:] - INFO [/127.0.0.1:12227:QuorumCnxManager$Listener@541] - Received connection request /127.0.0.1:43428 2016-07-21 08:11:46,027 [myid:] - INFO [/127.0.0.1:12230:QuorumCnxManager$Listener@541] - Received connection request /127.0.0.1:46109 2016-07-21 08:11:46,025 [myid:] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@600] - Notification: 1 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 
(n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,024 [myid:] - INFO [WorkerSender[myid=1]:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.1 to address: /127.0.0.1 2016-07-21 08:11:46,027 [myid:] - INFO [WorkerSender[myid=4]:QuorumCnxManager@199] - Have smaller server identifier, so dropping the connection: (5, 4) 2016-07-21 08:11:46,027 [myid:] - INFO [WorkerReceiver[myid=4]:FastLeaderElection@600] - Notification: 1 (message format version), 4 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 4 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,026 [myid:] - INFO [/127.0.0.1:12228:QuorumCnxManager$Listener@541] - Received connection request /127.0.0.1:38938 2016-07-21 08:11:46,025 [myid:] - INFO [WorkerSender[myid=3]:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.1 to address: /127.0.0.1 2016-07-21 08:11:46,025 [myid:] - INFO [WorkerSender[myid=2]:QuorumCnxManager@199] - Have smaller server identifier, so dropping the connection: (4, 2) 2016-07-21 08:11:46,028 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11222 2016-07-21 08:11:46,028 [myid:] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@810] - Connection broken for id 2, my id = 4, error = java.net.SocketException: Socket closed at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.read(SocketInputStream.java:152) at java.net.SocketInputStream.read(SocketInputStream.java:122) at java.net.SocketInputStream.read(SocketInputStream.java:210) at java.io.DataInputStream.readInt(DataInputStream.java:387) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:795) 2016-07-21 08:11:46,029 [myid:] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@813] - Interrupting SendWorker 2016-07-21 08:11:46,028 [myid:] - INFO [WorkerReceiver[myid=3]:FastLeaderElection@600] - Notification: 1 (message format version), 4 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 4 (n.sid), 0x0 
(n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,027 [myid:] - INFO [QuorumPeer[myid=5]/0:0:0:0:0:0:0:0:11226:QuorumPeer@774] - LOOKING 2016-07-21 08:11:46,027 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@600] - Notification: 1 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,027 [myid:] - INFO [/127.0.0.1:12231:QuorumCnxManager$Listener@541] - Received connection request /127.0.0.1:52086 2016-07-21 08:11:46,030 [myid:] - WARN [RecvWorker:4:QuorumCnxManager$RecvWorker@810] - Connection broken for id 4, my id = 2, error = java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:795) 2016-07-21 08:11:46,030 [myid:] - WARN [SendWorker:4:QuorumCnxManager$SendWorker@732] - Exception when using channel: for id 4 my id = 2 error = java.net.SocketException: Broken pipe 2016-07-21 08:11:46,029 [myid:] - INFO [WorkerReceiver[myid=3]:FastLeaderElection@600] - Notification: 1 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,031 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11222:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:52363 2016-07-21 08:11:46,029 [myid:] - INFO [/127.0.0.1:12228:QuorumCnxManager$Listener@541] - Received connection request /127.0.0.1:38940 2016-07-21 08:11:46,029 [myid:] - INFO [WorkerSender[myid=1]:QuorumCnxManager@199] - Have smaller server identifier, so dropping the connection: (5, 1) 2016-07-21 08:11:46,029 [myid:] - INFO [WorkerReceiver[myid=4]:FastLeaderElection@600] - Notification: 1 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,032 [myid:] - INFO 
[WorkerReceiver[myid=4]:FastLeaderElection@600] - Notification: 1 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,032 [myid:] - INFO [WorkerReceiver[myid=4]:FastLeaderElection@600] - Notification: 1 (message format version), 4 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,029 [myid:] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@732] - Exception when using channel: for id 2 my id = 4 error = java.net.SocketException: Socket closed 2016-07-21 08:11:46,033 [myid:] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@736] - Send worker leaving thread 2016-07-21 08:11:46,029 [myid:] - INFO [WorkerSender[myid=2]:QuorumCnxManager@199] - Have smaller server identifier, so dropping the connection: (5, 2) 2016-07-21 08:11:46,028 [myid:] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@600] - Notification: 1 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,033 [myid:] - INFO [WorkerSender[myid=1]:QuorumCnxManager@199] - Have smaller server identifier, so dropping the connection: (5, 1) 2016-07-21 08:11:46,032 [myid:] - INFO [WorkerReceiver[myid=4]:FastLeaderElection@600] - Notification: 1 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,031 [myid:] - INFO [/127.0.0.1:12231:QuorumCnxManager$Listener@541] - Received connection request /127.0.0.1:52088 2016-07-21 08:11:46,031 [myid:] - INFO [WorkerReceiver[myid=3]:FastLeaderElection@600] - Notification: 1 (message format version), 4 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,031 [myid:] - INFO [/127.0.0.1:12230:QuorumCnxManager$Listener@541] - 
Received connection request /127.0.0.1:46117 2016-07-21 08:11:46,030 [myid:] - WARN [SendWorker:4:QuorumCnxManager$SendWorker@736] - Send worker leaving thread 2016-07-21 08:11:46,030 [myid:] - WARN [RecvWorker:4:QuorumCnxManager$RecvWorker@813] - Interrupting SendWorker 2016-07-21 08:11:46,030 [myid:] - INFO [WorkerSender[myid=3]:QuorumCnxManager@199] - Have smaller server identifier, so dropping the connection: (5, 3) 2016-07-21 08:11:46,030 [myid:] - INFO [QuorumPeer[myid=5]/0:0:0:0:0:0:0:0:11226:FastLeaderElection@818] - New election. My id = 5, proposed zxid=0x0 2016-07-21 08:11:46,030 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@600] - Notification: 1 (message format version), 4 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 4 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,034 [myid:] - INFO [/127.0.0.1:12228:QuorumCnxManager$Listener@541] - Received connection request /127.0.0.1:38947 2016-07-21 08:11:46,034 [myid:] - INFO [WorkerReceiver[myid=3]:FastLeaderElection@600] - Notification: 1 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,034 [myid:] - INFO [WorkerReceiver[myid=4]:FastLeaderElection@600] - Notification: 1 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,033 [myid:] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@600] - Notification: 1 (message format version), 4 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,035 [myid:] - INFO [/127.0.0.1:12231:QuorumCnxManager$Listener@541] - Received connection request /127.0.0.1:52089 2016-07-21 08:11:46,035 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@600] - Notification: 1 (message format version), 4 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING 
(n.state), 3 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,036 [myid:] - INFO [WorkerReceiver[myid=4]:FastLeaderElection@600] - Notification: 1 (message format version), 4 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,036 [myid:] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@600] - Notification: 1 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,037 [myid:] - INFO [/127.0.0.1:12227:QuorumCnxManager$Listener@541] - Received connection request /127.0.0.1:43442 2016-07-21 08:11:46,036 [myid:] - INFO [WorkerReceiver[myid=5]:FastLeaderElection@600] - Notification: 1 (message format version), 4 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 4 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,036 [myid:] - INFO [WorkerReceiver[myid=3]:FastLeaderElection@600] - Notification: 1 (message format version), 4 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,037 [myid:] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@600] - Notification: 1 (message format version), 4 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 4 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,037 [myid:] - INFO [WorkerReceiver[myid=4]:FastLeaderElection@600] - Notification: 1 (message format version), 4 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,038 [myid:] - INFO [/127.0.0.1:12231:QuorumCnxManager$Listener@541] - Received connection request /127.0.0.1:52091 2016-07-21 08:11:46,037 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@600] - Notification: 1 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) 
LOOKING (my state) 2016-07-21 08:11:46,036 [myid:] - INFO [WorkerSender[myid=1]:QuorumCnxManager@199] - Have smaller server identifier, so dropping the connection: (5, 1) 2016-07-21 08:11:46,038 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@600] - Notification: 1 (message format version), 4 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,038 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@600] - Notification: 1 (message format version), 4 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,038 [myid:] - INFO [WorkerReceiver[myid=5]:FastLeaderElection@600] - Notification: 1 (message format version), 4 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,038 [myid:] - INFO [/127.0.0.1:12229:QuorumCnxManager$Listener@541] - Received connection request /127.0.0.1:51573 2016-07-21 08:11:46,038 [myid:] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@600] - Notification: 1 (message format version), 4 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,037 [myid:] - INFO [WorkerReceiver[myid=3]:FastLeaderElection@600] - Notification: 1 (message format version), 4 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,039 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@600] - Notification: 1 (message format version), 5 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 5 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,039 [myid:] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@600] - Notification: 1 (message format version), 4 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 
2016-07-21 08:11:46,039 [myid:] - INFO [WorkerReceiver[myid=5]:FastLeaderElection@600] - Notification: 1 (message format version), 4 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,040 [myid:] - INFO [WorkerReceiver[myid=4]:FastLeaderElection@600] - Notification: 1 (message format version), 5 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 5 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,039 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@600] - Notification: 1 (message format version), 5 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,039 [myid:] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@600] - Notification: 1 (message format version), 5 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 5 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,039 [myid:] - INFO [/127.0.0.1:12231:QuorumCnxManager$Listener@541] - Received connection request /127.0.0.1:52093 2016-07-21 08:11:46,040 [myid:] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@600] - Notification: 1 (message format version), 5 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,040 [myid:] - INFO [WorkerReceiver[myid=3]:FastLeaderElection@600] - Notification: 1 (message format version), 5 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,040 [myid:] - INFO [WorkerReceiver[myid=4]:FastLeaderElection@600] - Notification: 1 (message format version), 5 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,040 [myid:] - INFO [WorkerReceiver[myid=5]:FastLeaderElection@600] - Notification: 1 (message format version), 5 (n.leader), 0x0 (n.zxid), 0x1 (n.round), 
LOOKING (n.state), 5 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,041 [myid:] - INFO [WorkerReceiver[myid=4]:FastLeaderElection@600] - Notification: 1 (message format version), 5 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,041 [myid:] - INFO [/127.0.0.1:12227:QuorumCnxManager$Listener@541] - Received connection request /127.0.0.1:43444 2016-07-21 08:11:46,041 [myid:] - INFO [WorkerReceiver[myid=4]:FastLeaderElection@600] - Notification: 1 (message format version), 5 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 4 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,041 [myid:] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@810] - Connection broken for id 1, my id = 5, error = java.net.SocketException: Socket closed at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.read(SocketInputStream.java:152) at java.net.SocketInputStream.read(SocketInputStream.java:122) at java.io.DataInputStream.readFully(DataInputStream.java:195) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:805) 2016-07-21 08:11:46,042 [myid:] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@813] - Interrupting SendWorker 2016-07-21 08:11:46,041 [myid:] - INFO [WorkerReceiver[myid=3]:FastLeaderElection@600] - Notification: 1 (message format version), 5 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 4 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,041 [myid:] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@727] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at 
java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:879) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:65) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:715) 2016-07-21 08:11:46,041 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11222:NIOServerCnxn@827] - Processing stat command from /127.0.0.1:52363 2016-07-21 08:11:46,041 [myid:] - WARN [RecvWorker:5:QuorumCnxManager$RecvWorker@810] - Connection broken for id 5, my id = 1, error = java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:795) 2016-07-21 08:11:46,043 [myid:] - WARN [RecvWorker:5:QuorumCnxManager$RecvWorker@813] - Interrupting SendWorker 2016-07-21 08:11:46,041 [myid:] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@600] - Notification: 1 (message format version), 5 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,041 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@600] - Notification: 1 (message format version), 5 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,043 [myid:] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@600] - Notification: 1 (message format version), 5 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 4 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,043 [myid:] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@736] - Send worker leaving thread 2016-07-21 08:11:46,042 [myid:] - INFO [WorkerReceiver[myid=3]:FastLeaderElection@600] - Notification: 1 (message format version), 5 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 5 (n.sid), 0x0 (n.peerEpoch) 
LOOKING (my state) 2016-07-21 08:11:46,042 [myid:] - WARN [SendWorker:5:QuorumCnxManager$SendWorker@727] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:879) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:65) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:715) 2016-07-21 08:11:46,044 [myid:] - WARN [SendWorker:5:QuorumCnxManager$SendWorker@736] - Send worker leaving thread 2016-07-21 08:11:46,042 [myid:] - INFO [WorkerReceiver[myid=4]:FastLeaderElection@600] - Notification: 1 (message format version), 5 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,042 [myid:] - INFO [/127.0.0.1:12231:QuorumCnxManager$Listener@541] - Received connection request /127.0.0.1:52096 2016-07-21 08:11:46,041 [myid:] - INFO [WorkerReceiver[myid=5]:FastLeaderElection@600] - Notification: 1 (message format version), 4 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,045 [myid:] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@727] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at 
java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:879) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:65) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:715) 2016-07-21 08:11:46,045 [myid:] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@736] - Send worker leaving thread 2016-07-21 08:11:46,045 [myid:] - INFO [Thread-2:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52363 (no session established for client) 2016-07-21 08:11:46,044 [myid:] - INFO [WorkerReceiver[myid=3]:FastLeaderElection@600] - Notification: 1 (message format version), 5 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,044 [myid:] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@600] - Notification: 1 (message format version), 5 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,044 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@600] - Notification: 1 (message format version), 5 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 4 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,046 [myid:] - INFO [WorkerReceiver[myid=3]:FastLeaderElection@600] - Notification: 1 (message format version), 5 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,045 [myid:] - INFO [/127.0.0.1:12227:QuorumCnxManager$Listener@541] - Received connection request /127.0.0.1:43446 2016-07-21 08:11:46,045 [myid:] - INFO [WorkerReceiver[myid=5]:FastLeaderElection@600] - Notification: 1 (message format version), 5 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,045 
[myid:] - WARN [RecvWorker:5:QuorumCnxManager$RecvWorker@810] - Connection broken for id 5, my id = 1, error = java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:795) 2016-07-21 08:11:46,047 [myid:] - WARN [RecvWorker:5:QuorumCnxManager$RecvWorker@813] - Interrupting SendWorker 2016-07-21 08:11:46,045 [myid:] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@810] - Connection broken for id 1, my id = 5, error = java.net.SocketException: Socket closed at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.read(SocketInputStream.java:152) at java.net.SocketInputStream.read(SocketInputStream.java:122) at java.net.SocketInputStream.read(SocketInputStream.java:210) at java.io.DataInputStream.readInt(DataInputStream.java:387) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:795) 2016-07-21 08:11:46,047 [myid:] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@813] - Interrupting SendWorker 2016-07-21 08:11:46,047 [myid:] - INFO [WorkerReceiver[myid=5]:FastLeaderElection@600] - Notification: 1 (message format version), 5 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 4 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,047 [myid:] - WARN [SendWorker:5:QuorumCnxManager$SendWorker@727] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:879) at 
org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:65) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:715) 2016-07-21 08:11:46,048 [myid:] - WARN [SendWorker:5:QuorumCnxManager$SendWorker@736] - Send worker leaving thread 2016-07-21 08:11:46,046 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@600] - Notification: 1 (message format version), 5 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,048 [myid:] - INFO [WorkerReceiver[myid=5]:FastLeaderElection@600] - Notification: 1 (message format version), 5 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,048 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@600] - Notification: 1 (message format version), 5 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 5 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,049 [myid:] - INFO [WorkerReceiver[myid=5]:FastLeaderElection@600] - Notification: 1 (message format version), 5 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,049 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@600] - Notification: 1 (message format version), 5 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 5 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,049 [myid:] - INFO [WorkerReceiver[myid=5]:FastLeaderElection@600] - Notification: 1 (message format version), 5 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2016-07-21 08:11:46,245 [myid:] - INFO [QuorumPeer[myid=4]/0:0:0:0:0:0:0:0:11225:QuorumPeer@844] - FOLLOWING 2016-07-21 08:11:46,246 [myid:] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11223:QuorumPeer@844] - FOLLOWING 2016-07-21 08:11:46,247 [myid:] - 
INFO [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:11224:QuorumPeer@844] - FOLLOWING 2016-07-21 08:11:46,249 [myid:] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11222:QuorumPeer@844] - FOLLOWING 2016-07-21 08:11:46,249 [myid:] - INFO [QuorumPeer[myid=5]/0:0:0:0:0:0:0:0:11226:QuorumPeer@856] - LEADING 2016-07-21 08:11:46,251 [myid:] - INFO [QuorumPeer[myid=4]/0:0:0:0:0:0:0:0:11225:Learner@86] - TCP NoDelay set to: true 2016-07-21 08:11:46,255 [myid:] - INFO [QuorumPeer[myid=5]/0:0:0:0:0:0:0:0:11226:Leader@59] - TCP NoDelay set to: true 2016-07-21 08:11:46,259 [myid:] - INFO [QuorumPeer[myid=4]/0:0:0:0:0:0:0:0:11225:Environment@100] - Server environment:zookeeper.version=3.4.9-SNAPSHOT-1753645, built on 07/21/2016 07:46 GMT 2016-07-21 08:11:46,259 [myid:] - INFO [QuorumPeer[myid=4]/0:0:0:0:0:0:0:0:11225:Environment@100] - Server environment:host.name=asf907.gq1.ygridcore.net 2016-07-21 08:11:46,260 [myid:] - INFO [QuorumPeer[myid=4]/0:0:0:0:0:0:0:0:11225:Environment@100] - Server environment:java.version=1.7.0_80 2016-07-21 08:11:46,260 [myid:] - INFO [QuorumPeer[myid=4]/0:0:0:0:0:0:0:0:11225:Environment@100] - Server environment:java.vendor=Oracle Corporation 2016-07-21 08:11:46,260 [myid:] - INFO [QuorumPeer[myid=4]/0:0:0:0:0:0:0:0:11225:Environment@100] - Server environment:java.home=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.7/jre 2016-07-21 08:11:46,260 [myid:] - INFO [QuorumPeer[myid=4]/0:0:0:0:0:0:0:0:11225:Environment@100] - Server 
environment:java.class.path=/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/classes:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/lib/antlr-2.7.6.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/lib/checkstyle-5.0.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/lib/commons-beanutils-core-1.7.0.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/lib/commons-cli-1.0.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/lib/commons-collections-3.2.2.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/lib/commons-lang-1.0.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/lib/commons-logging-1.0.3.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/lib/google-collections-0.9.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/lib/junit-4.8.1.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/lib/mockito-all-1.8.2.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/classes:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/src/java/lib/ivy-2.4.0.jar:/home/jenkins/tools/ant/latest/lib/ant.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/lib/jline-0.9.94.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/lib/log4j-1.2.16.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/lib/netty-3.10.5.Final.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/lib/slf4j-api-1.6.1.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/lib/slf4j-log4j12-1.6.1.jar:/home/jenkins/tools/ant/
apache-ant-1.9.4/lib/ant-launcher.jar:/home/jenkins/tools/ant/latest/lib/ant-junit.jar:/home/jenkins/tools/ant/latest/lib/ant-junit4.jar 2016-07-21 08:11:46,260 [myid:] - INFO [QuorumPeer[myid=4]/0:0:0:0:0:0:0:0:11225:Environment@100] - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib 2016-07-21 08:11:46,260 [myid:] - INFO [QuorumPeer[myid=4]/0:0:0:0:0:0:0:0:11225:Environment@100] - Server environment:java.io.tmpdir=/tmp 2016-07-21 08:11:46,260 [myid:] - INFO [QuorumPeer[myid=4]/0:0:0:0:0:0:0:0:11225:Environment@100] - Server environment:java.compiler=<NA> 2016-07-21 08:11:46,260 [myid:] - INFO [QuorumPeer[myid=4]/0:0:0:0:0:0:0:0:11225:Environment@100] - Server environment:os.name=Linux 2016-07-21 08:11:46,260 [myid:] - INFO [QuorumPeer[myid=4]/0:0:0:0:0:0:0:0:11225:Environment@100] - Server environment:os.arch=amd64 2016-07-21 08:11:46,260 [myid:] - INFO [QuorumPeer[myid=4]/0:0:0:0:0:0:0:0:11225:Environment@100] - Server environment:os.version=3.13.0-36-lowlatency 2016-07-21 08:11:46,260 [myid:] - INFO [QuorumPeer[myid=4]/0:0:0: ...[truncated 1420555 chars]... 
ager$SendWorker.run(QuorumCnxManager.java:715) 2016-07-21 08:14:21,678 [myid:] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@736] - Send worker leaving thread 2016-07-21 08:14:21,672 [myid:] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@810] - Connection broken for id 1, my id = 5, error = java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:795) 2016-07-21 08:14:21,679 [myid:] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@813] - Interrupting SendWorker 2016-07-21 08:14:21,672 [myid:] - WARN [SendWorker:5:QuorumCnxManager$SendWorker@727] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:879) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:65) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:715) 2016-07-21 08:14:21,679 [myid:] - WARN [SendWorker:5:QuorumCnxManager$SendWorker@736] - Send worker leaving thread 2016-07-21 08:14:21,679 [myid:] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@727] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at 
org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:879) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:65) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:715) 2016-07-21 08:14:21,679 [myid:] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@736] - Send worker leaving thread 2016-07-21 08:14:21,678 [myid:] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@727] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:879) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:65) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:715) 2016-07-21 08:14:21,679 [myid:] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@736] - Send worker leaving thread 2016-07-21 08:14:21,677 [myid:] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@727] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:879) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:65) at 
org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:715) 2016-07-21 08:14:21,680 [myid:] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@736] - Send worker leaving thread 2016-07-21 08:14:21,675 [myid:] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@727] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:879) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:65) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:715) 2016-07-21 08:14:21,680 [myid:] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@736] - Send worker leaving thread 2016-07-21 08:14:22,000 [myid:] - INFO [SessionTracker:SessionTrackerImpl@162] - SessionTrackerImpl exited loop! 
2016-07-21 08:14:22,317 [myid:] - INFO [/127.0.0.1:12241:QuorumCnxManager$Listener@560] - Leaving listener 2016-07-21 08:14:22,320 [myid:] - INFO [/127.0.0.1:12242:QuorumCnxManager$Listener@560] - Leaving listener 2016-07-21 08:14:22,579 [myid:] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11233:Follower@166] - shutdown called java.lang.Exception: shutdown Follower at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:166) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:850) 2016-07-21 08:14:22,579 [myid:] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11233:FollowerZooKeeperServer@140] - Shutting down 2016-07-21 08:14:22,579 [myid:] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11233:SyncRequestProcessor@209] - Shutting down 2016-07-21 08:14:22,579 [myid:] - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:11233:QuorumPeer@874] - QuorumPeer main thread exited 2016-07-21 08:14:22,582 [myid:] - INFO [main:QuorumBase@306] - Shutting down quorum peer QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11234 2016-07-21 08:14:22,582 [myid:] - INFO [main:Follower@166] - shutdown called java.lang.Exception: shutdown Follower at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:166) at org.apache.zookeeper.server.quorum.QuorumPeer.shutdown(QuorumPeer.java:891) at org.apache.zookeeper.test.QuorumBase.shutdown(QuorumBase.java:307) at org.apache.zookeeper.test.QuorumBase.shutdownServers(QuorumBase.java:298) at org.apache.zookeeper.test.QuorumBase.tearDown(QuorumBase.java:285) at org.apache.zookeeper.test.QuorumZxidSyncTest.tearDown(QuorumZxidSyncTest.java:169) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:37)
    at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:48)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
    at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:532)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1179)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:1030)
2016-07-21 08:14:22,582 [myid:] - INFO [main:FollowerZooKeeperServer@140] - Shutting down
2016-07-21 08:14:22,582 [myid:] - INFO [main:ZooKeeperServer@469] - shutting down
2016-07-21 08:14:22,582 [myid:] - INFO [main:FollowerRequestProcessor@107] - Shutting down
2016-07-21 08:14:22,582 [myid:] - INFO [main:CommitProcessor@184] - Shutting down
2016-07-21 08:14:22,582 [myid:] - INFO [FollowerRequestProcessor:2:FollowerRequestProcessor@97] - FollowerRequestProcessor exited loop!
2016-07-21 08:14:22,582 [myid:] - INFO [CommitProcessor:2:CommitProcessor@153] - CommitProcessor exited loop!
2016-07-21 08:14:22,582 [myid:] - INFO [main:FinalRequestProcessor@402] - shutdown of request processor complete
2016-07-21 08:14:22,583 [myid:] - INFO [main:SyncRequestProcessor@209] - Shutting down
2016-07-21 08:14:22,583 [myid:] - INFO [SyncThread:2:SyncRequestProcessor@187] - SyncRequestProcessor exited!
2016-07-21 08:14:22,584 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11234:NIOServerCnxnFactory@219] - NIOServerCnxn factory exited run method
2016-07-21 08:14:22,585 [myid:] - WARN [SendWorker:3:QuorumCnxManager$SendWorker@727] - Interrupted while waiting for message on queue
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:879)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:65)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:715)
2016-07-21 08:14:22,585 [myid:] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@810] - Connection broken for id 2, my id = 4, error =
java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:392)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:795)
2016-07-21 08:14:22,585 [myid:] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@813] - Interrupting SendWorker
2016-07-21 08:14:22,585 [myid:] - WARN [RecvWorker:3:QuorumCnxManager$RecvWorker@810] - Connection broken for id 3, my id = 2, error =
java.net.SocketException: Socket closed
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.read(SocketInputStream.java:152)
    at java.net.SocketInputStream.read(SocketInputStream.java:122)
    at java.net.SocketInputStream.read(SocketInputStream.java:210)
    at java.io.DataInputStream.readInt(DataInputStream.java:387)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:795)
2016-07-21 08:14:22,586 [myid:] - WARN [RecvWorker:3:QuorumCnxManager$RecvWorker@813] - Interrupting SendWorker
2016-07-21 08:14:22,585 [myid:] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@810] - Connection broken for id 2, my id = 5, error =
java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:392)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:795)
2016-07-21 08:14:22,586 [myid:] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@813] - Interrupting SendWorker
2016-07-21 08:14:22,585 [myid:] - WARN [RecvWorker:5:QuorumCnxManager$RecvWorker@810] - Connection broken for id 5, my id = 2, error =
java.net.SocketException: Socket closed
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.read(SocketInputStream.java:152)
    at java.net.SocketInputStream.read(SocketInputStream.java:122)
    at java.net.SocketInputStream.read(SocketInputStream.java:210)
    at java.io.DataInputStream.readInt(DataInputStream.java:387)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:795)
2016-07-21 08:14:22,587 [myid:] - WARN [RecvWorker:5:QuorumCnxManager$RecvWorker@813] - Interrupting SendWorker
2016-07-21 08:14:22,585 [myid:] - WARN [SendWorker:3:QuorumCnxManager$SendWorker@736] - Send worker leaving thread
2016-07-21 08:14:22,585 [myid:] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@810] - Connection broken for id 2, my id = 3, error =
java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:392)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:795)
2016-07-21 08:14:22,587 [myid:] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@813] - Interrupting SendWorker
2016-07-21 08:14:22,585 [myid:] - WARN [SendWorker:5:QuorumCnxManager$SendWorker@727] - Interrupted while waiting for message on queue
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:879)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:65)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:715)
2016-07-21 08:14:22,587 [myid:] - WARN [SendWorker:5:QuorumCnxManager$SendWorker@736] - Send worker leaving thread
2016-07-21 08:14:22,585 [myid:] - ERROR [/127.0.0.1:12239:QuorumCnxManager$Listener@547] - Exception while listening
java.net.SocketException: Socket closed
    at java.net.PlainSocketImpl.socketAccept(Native Method)
    at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398)
    at java.net.ServerSocket.implAccept(ServerSocket.java:530)
    at java.net.ServerSocket.accept(ServerSocket.java:498)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$Listener.run(QuorumCnxManager.java:539)
2016-07-21 08:14:22,587 [myid:] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@727] - Interrupted while waiting for message on queue
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:879)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:65)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:715)
2016-07-21 08:14:22,588 [myid:] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@736] - Send worker leaving thread
2016-07-21 08:14:22,587 [myid:] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@727] - Interrupted while waiting for message on queue
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:879)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:65)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:715)
2016-07-21 08:14:22,589 [myid:] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@736] - Send worker leaving thread
2016-07-21 08:14:22,586 [myid:] - WARN [SendWorker:4:QuorumCnxManager$SendWorker@727] - Interrupted while waiting for message on queue
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:879)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:65)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:715)
2016-07-21 08:14:22,589 [myid:] - WARN [SendWorker:4:QuorumCnxManager$SendWorker@736] - Send worker leaving thread
2016-07-21 08:14:22,586 [myid:] - WARN [RecvWorker:4:QuorumCnxManager$RecvWorker@810] - Connection broken for id 4, my id = 2, error =
java.net.SocketException: Socket closed
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.read(SocketInputStream.java:152)
    at java.net.SocketInputStream.read(SocketInputStream.java:122)
    at java.net.SocketInputStream.read(SocketInputStream.java:210)
    at java.io.DataInputStream.readInt(DataInputStream.java:387)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:795)
2016-07-21 08:14:22,589 [myid:] - WARN [RecvWorker:4:QuorumCnxManager$RecvWorker@813] - Interrupting SendWorker
2016-07-21 08:14:22,586 [myid:] - INFO [main:QuorumBase@310] - Shutting down leader election QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11234
2016-07-21 08:14:22,589 [myid:] - INFO [main:QuorumBase@315] - Waiting for QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11234 to exit thread
2016-07-21 08:14:22,585 [myid:] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@727] - Interrupted while waiting for message on queue
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:879)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:65)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:715)
2016-07-21 08:14:22,590 [myid:] - WARN [SendWorker:2:QuorumCnxManager$SendWorker@736] - Send worker leaving thread
2016-07-21 08:14:22,673 [myid:] - INFO [/127.0.0.1:12238:QuorumCnxManager$Listener@560] - Leaving listener
2016-07-21 08:14:23,079 [myid:] - INFO [WorkerSender[myid=3]:FastLeaderElection$Messenger$WorkerSender@438] - WorkerSender is down
2016-07-21 08:14:23,080 [myid:] - INFO [WorkerSender[myid=4]:FastLeaderElection$Messenger$WorkerSender@438] - WorkerSender is down
2016-07-21 08:14:23,084 [myid:] - INFO [WorkerSender[myid=5]:FastLeaderElection$Messenger$WorkerSender@438] - WorkerSender is down
2016-07-21 08:14:23,088 [myid:] - INFO [WorkerReceiver[myid=3]:FastLeaderElection$Messenger$WorkerReceiver@407] - WorkerReceiver is down
2016-07-21 08:14:23,088 [myid:] - INFO [WorkerReceiver[myid=4]:FastLeaderElection$Messenger$WorkerReceiver@407] - WorkerReceiver is down
2016-07-21 08:14:23,089 [myid:] - INFO [WorkerReceiver[myid=5]:FastLeaderElection$Messenger$WorkerReceiver@407] - WorkerReceiver is down
2016-07-21 08:14:23,578 [myid:] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11234:Follower@166] - shutdown called
java.lang.Exception: shutdown Follower
    at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:166)
    at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:850)
2016-07-21 08:14:23,578 [myid:] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11234:FollowerZooKeeperServer@140] - Shutting down
2016-07-21 08:14:23,578 [myid:] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11234:SyncRequestProcessor@209] - Shutting down
2016-07-21 08:14:23,578 [myid:] - WARN [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:11234:QuorumPeer@874] - QuorumPeer main thread exited
2016-07-21 08:14:23,579 [myid:] - INFO [main:QuorumBase@306] - Shutting down quorum peer QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:11235
2016-07-21 08:14:23,579 [myid:] - INFO [main:Follower@166] - shutdown called
java.lang.Exception: shutdown Follower
    at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:166)
    at org.apache.zookeeper.server.quorum.QuorumPeer.shutdown(QuorumPeer.java:891)
    at org.apache.zookeeper.test.QuorumBase.shutdown(QuorumBase.java:307)
    at org.apache.zookeeper.test.QuorumBase.shutdownServers(QuorumBase.java:299)
    at org.apache.zookeeper.test.QuorumBase.tearDown(QuorumBase.java:285)
    at org.apache.zookeeper.test.QuorumZxidSyncTest.tearDown(QuorumZxidSyncTest.java:169)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:37)
    at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:48)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
    at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:532)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1179)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:1030)
2016-07-21 08:14:23,579 [myid:] - INFO [main:FollowerZooKeeperServer@140] - Shutting down
2016-07-21 08:14:23,579 [myid:] - INFO [main:ZooKeeperServer@469] - shutting down
2016-07-21 08:14:23,580 [myid:] - INFO [main:FollowerRequestProcessor@107] - Shutting down
2016-07-21 08:14:23,580 [myid:] - INFO [main:CommitProcessor@184] - Shutting down
2016-07-21 08:14:23,580 [myid:] - INFO [FollowerRequestProcessor:3:FollowerRequestProcessor@97] - FollowerRequestProcessor exited loop!
2016-07-21 08:14:23,580 [myid:] - INFO [CommitProcessor:3:CommitProcessor@153] - CommitProcessor exited loop!
2016-07-21 08:14:23,580 [myid:] - INFO [main:FinalRequestProcessor@402] - shutdown of request processor complete
2016-07-21 08:14:23,581 [myid:] - INFO [main:SyncRequestProcessor@209] - Shutting down
2016-07-21 08:14:23,581 [myid:] - INFO [SyncThread:3:SyncRequestProcessor@187] - SyncRequestProcessor exited!
2016-07-21 08:14:23,582 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11235:NIOServerCnxnFactory@219] - NIOServerCnxn factory exited run method
2016-07-21 08:14:23,583 [myid:] - ERROR [/127.0.0.1:12240:QuorumCnxManager$Listener@547] - Exception while listening
java.net.SocketException: Socket closed
    at java.net.PlainSocketImpl.socketAccept(Native Method)
    at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398)
    at java.net.ServerSocket.implAccept(ServerSocket.java:530)
    at java.net.ServerSocket.accept(ServerSocket.java:498)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$Listener.run(QuorumCnxManager.java:539)
2016-07-21 08:14:23,583 [myid:] - WARN [SendWorker:5:QuorumCnxManager$SendWorker@727] - Interrupted while waiting for message on queue
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:879)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:65)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:715)
2016-07-21 08:14:23,585 [myid:] - WARN [SendWorker:5:QuorumCnxManager$SendWorker@736] - Send worker leaving thread
2016-07-21 08:14:23,584 [myid:] - WARN [RecvWorker:4:QuorumCnxManager$RecvWorker@810] - Connection broken for id 4, my id = 3, error =
java.net.SocketException: Socket closed
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.read(SocketInputStream.java:152)
    at java.net.SocketInputStream.read(SocketInputStream.java:122)
    at java.net.SocketInputStream.read(SocketInputStream.java:210)
    at java.io.DataInputStream.readInt(DataInputStream.java:387)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:795)
2016-07-21 08:14:23,585 [myid:] - WARN [RecvWorker:4:QuorumCnxManager$RecvWorker@813] - Interrupting SendWorker
2016-07-21 08:14:23,584 [myid:] - WARN [SendWorker:4:QuorumCnxManager$SendWorker@727] - Interrupted while waiting for message on queue
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:879)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:65)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:715)
2016-07-21 08:14:23,585 [myid:] - WARN [SendWorker:4:QuorumCnxManager$SendWorker@736] - Send worker leaving thread
2016-07-21 08:14:23,584 [myid:] - WARN [RecvWorker:5:QuorumCnxManager$RecvWorker@810] - Connection broken for id 5, my id = 3, error =
java.net.SocketException: Socket closed
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.read(SocketInputStream.java:152)
    at java.net.SocketInputStream.read(SocketInputStream.java:122)
    at java.net.SocketInputStream.read(SocketInputStream.java:210)
    at java.io.DataInputStream.readInt(DataInputStream.java:387)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:795)
2016-07-21 08:14:23,585 [myid:] - WARN [RecvWorker:5:QuorumCnxManager$RecvWorker@813] - Interrupting SendWorker
2016-07-21 08:14:23,584 [myid:] - WARN [RecvWorker:3:QuorumCnxManager$RecvWorker@810] - Connection broken for id 3, my id = 5, error =
java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:392)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:795)
2016-07-21 08:14:23,586 [myid:] - WARN [RecvWorker:3:QuorumCnxManager$RecvWorker@813] - Interrupting SendWorker
2016-07-21 08:14:23,584 [myid:] - WARN [RecvWorker:3:QuorumCnxManager$RecvWorker@810] - Connection broken for id 3, my id = 4, error =
java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:392)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:795)
2016-07-21 08:14:23,586 [myid:] - WARN [RecvWorker:3:QuorumCnxManager$RecvWorker@813] - Interrupting SendWorker
2016-07-21 08:14:23,584 [myid:] - INFO [main:QuorumBase@310] - Shutting down leader election QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:11235
2016-07-21 08:14:23,586 [myid:] - INFO [main:QuorumBase@315] - Waiting for QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:11235 to exit thread
2016-07-21 08:14:23,586 [myid:] - WARN [SendWorker:3:QuorumCnxManager$SendWorker@727] - Interrupted while waiting for message on queue
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:879)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:65)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:715)
2016-07-21 08:14:23,586 [myid:] - WARN [SendWorker:3:QuorumCnxManager$SendWorker@736] - Send worker leaving thread
2016-07-21 08:14:23,586 [myid:] - WARN [SendWorker:3:QuorumCnxManager$SendWorker@727] - Interrupted while waiting for message on queue
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:879)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:65)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:715)
2016-07-21 08:14:23,587 [myid:] - WARN [SendWorker:3:QuorumCnxManager$SendWorker@736] - Send worker leaving thread
2016-07-21 08:14:23,588 [myid:] - INFO [/127.0.0.1:12239:QuorumCnxManager$Listener@560] - Leaving listener
2016-07-21 08:14:24,355 [myid:] - INFO [WorkerSender[myid=2]:FastLeaderElection$Messenger$WorkerSender@438] - WorkerSender is down
2016-07-21 08:14:24,358 [myid:] - INFO [WorkerSender[myid=1]:FastLeaderElection$Messenger$WorkerSender@438] - WorkerSender is down
2016-07-21 08:14:24,360 [myid:] - INFO [WorkerSender[myid=3]:FastLeaderElection$Messenger$WorkerSender@438] - WorkerSender is down
2016-07-21 08:14:24,361 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection$Messenger$WorkerReceiver@407] - WorkerReceiver is down
2016-07-21 08:14:24,361 [myid:] - INFO [WorkerReceiver[myid=2]:FastLeaderElection$Messenger$WorkerReceiver@407] - WorkerReceiver is down
2016-07-21 08:14:24,362 [myid:] - INFO [WorkerReceiver[myid=3]:FastLeaderElection$Messenger$WorkerReceiver@407] - WorkerReceiver is down
2016-07-21 08:14:24,578 [myid:] - INFO [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:11235:Follower@166] - shutdown called
java.lang.Exception: shutdown Follower
    at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:166)
    at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:850)
2016-07-21 08:14:24,578 [myid:] - INFO [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:11235:FollowerZooKeeperServer@140] - Shutting down
2016-07-21 08:14:24,578 [myid:] - INFO [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:11235:SyncRequestProcessor@209] - Shutting down
2016-07-21 08:14:24,578 [myid:] - WARN [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:11235:QuorumPeer@874] - QuorumPeer main thread exited
2016-07-21 08:14:24,579 [myid:] - INFO [main:QuorumBase@306] - Shutting down quorum peer QuorumPeer[myid=4]/0:0:0:0:0:0:0:0:11236
2016-07-21 08:14:24,579 [myid:] - INFO [main:Follower@166] - shutdown called
java.lang.Exception: shutdown Follower
    at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:166)
    at org.apache.zookeeper.server.quorum.QuorumPeer.shutdown(QuorumPeer.java:891)
    at org.apache.zookeeper.test.QuorumBase.shutdown(QuorumBase.java:307)
    at org.apache.zookeeper.test.QuorumBase.shutdownServers(QuorumBase.java:300)
    at org.apache.zookeeper.test.QuorumBase.tearDown(QuorumBase.java:285)
    at org.apache.zookeeper.test.QuorumZxidSyncTest.tearDown(QuorumZxidSyncTest.java:169)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:37)
    at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:48)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
    at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:532)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1179)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:1030)
2016-07-21 08:14:24,580 [myid:] - INFO [main:FollowerZooKeeperServer@140] - Shutting down
2016-07-21 08:14:24,581 [myid:] - INFO [main:ZooKeeperServer@469] - shutting down
2016-07-21 08:14:24,581 [myid:] - INFO [main:FollowerRequestProcessor@107] - Shutting down
2016-07-21 08:14:24,581 [myid:] - INFO [main:CommitProcessor@184] - Shutting down
2016-07-21 08:14:24,581 [myid:] - INFO [main:FinalRequestProcessor@402] - shutdown of request processor complete
2016-07-21 08:14:24,581 [myid:] - INFO [FollowerRequestProcessor:4:FollowerRequestProcessor@97] - FollowerRequestProcessor exited loop!
2016-07-21 08:14:24,581 [myid:] - INFO [main:SyncRequestProcessor@209] - Shutting down
2016-07-21 08:14:24,581 [myid:] - INFO [CommitProcessor:4:CommitProcessor@153] - CommitProcessor exited loop!
2016-07-21 08:14:24,582 [myid:] - INFO [SyncThread:4:SyncRequestProcessor@187] - SyncRequestProcessor exited!
2016-07-21 08:14:24,583 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11236:NIOServerCnxnFactory@219] - NIOServerCnxn factory exited run method
2016-07-21 08:14:24,585 [myid:] - INFO [main:QuorumBase@310] - Shutting down leader election QuorumPeer[myid=4]/0:0:0:0:0:0:0:0:11236
2016-07-21 08:14:24,586 [myid:] - ERROR [/127.0.0.1:12241:QuorumCnxManager$Listener@547] - Exception while listening
java.net.SocketException: Socket closed
    at java.net.PlainSocketImpl.socketAccept(Native Method)
    at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398)
    at java.net.ServerSocket.implAccept(ServerSocket.java:530)
    at java.net.ServerSocket.accept(ServerSocket.java:498)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$Listener.run(QuorumCnxManager.java:539)
2016-07-21 08:14:24,586 [myid:] - WARN [SendWorker:5:QuorumCnxManager$SendWorker@727] - Interrupted while waiting for message on queue
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:879)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:65)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:715)
2016-07-21 08:14:24,586 [myid:] - WARN [SendWorker:5:QuorumCnxManager$SendWorker@736] - Send worker leaving thread
2016-07-21 08:14:24,586 [myid:] - INFO [main:QuorumBase@315] - Waiting for QuorumPeer[myid=4]/0:0:0:0:0:0:0:0:11236 to exit thread
2016-07-21 08:14:24,586 [myid:] - WARN [RecvWorker:4:QuorumCnxManager$RecvWorker@810] - Connection broken for id 4, my id = 5, error =
java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:392)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:795)
2016-07-21 08:14:24,587 [myid:] - WARN [RecvWorker:4:QuorumCnxManager$RecvWorker@813] - Interrupting SendWorker
2016-07-21 08:14:24,585 [myid:] - INFO [/127.0.0.1:12240:QuorumCnxManager$Listener@560] - Leaving listener
2016-07-21 08:14:24,585 [myid:] - WARN [RecvWorker:5:QuorumCnxManager$RecvWorker@810] - Connection broken for id 5, my id = 4, error =
java.net.SocketException: Socket closed
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.read(SocketInputStream.java:152)
    at java.net.SocketInputStream.read(SocketInputStream.java:122)
    at java.net.SocketInputStream.read(SocketInputStream.java:210)
    at java.io.DataInputStream.readInt(DataInputStream.java:387)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:795)
2016-07-21 08:14:24,588 [myid:] - WARN [RecvWorker:5:QuorumCnxManager$RecvWorker@813] - Interrupting SendWorker
2016-07-21 08:14:24,588 [myid:] - WARN [SendWorker:4:QuorumCnxManager$SendWorker@727] - Interrupted while waiting for message on queue
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:879)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:65)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:715)
2016-07-21 08:14:24,588 [myid:] - WARN [SendWorker:4:QuorumCnxManager$SendWorker@736] - Send worker leaving thread
2016-07-21 08:14:25,579 [myid:] - INFO [QuorumPeer[myid=4]/0:0:0:0:0:0:0:0:11236:Follower@166] - shutdown called
java.lang.Exception: shutdown Follower
    at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:166)
    at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:850)
2016-07-21 08:14:25,579 [myid:] - INFO [QuorumPeer[myid=4]/0:0:0:0:0:0:0:0:11236:FollowerZooKeeperServer@140] - Shutting down
2016-07-21 08:14:25,579 [myid:] - INFO [QuorumPeer[myid=4]/0:0:0:0:0:0:0:0:11236:SyncRequestProcessor@209] - Shutting down
2016-07-21 08:14:25,579 [myid:] - WARN [QuorumPeer[myid=4]/0:0:0:0:0:0:0:0:11236:QuorumPeer@874] - QuorumPeer main thread exited
2016-07-21 08:14:25,579 [myid:] - INFO [main:QuorumBase@306] - Shutting down quorum peer QuorumPeer[myid=5]/0:0:0:0:0:0:0:0:11237
2016-07-21 08:14:25,579 [myid:] - INFO [main:Leader@496] - Shutting down
2016-07-21 08:14:25,580 [myid:] - INFO [main:Leader@502] - Shutdown called
java.lang.Exception: shutdown Leader! reason: quorum Peer shutdown
    at org.apache.zookeeper.server.quorum.Leader.shutdown(Leader.java:502)
    at org.apache.zookeeper.server.quorum.QuorumPeer.shutdown(QuorumPeer.java:888)
    at org.apache.zookeeper.test.QuorumBase.shutdown(QuorumBase.java:307)
    at org.apache.zookeeper.test.QuorumBase.shutdownServers(QuorumBase.java:301)
    at org.apache.zookeeper.test.QuorumBase.tearDown(QuorumBase.java:285)
    at org.apache.zookeeper.test.QuorumZxidSyncTest.tearDown(QuorumZxidSyncTest.java:169)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:37)
    at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:48)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
    at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:532)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1179)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:1030)
2016-07-21 08:14:25,581 [myid:] - INFO [main:ZooKeeperServer@469] - shutting down
2016-07-21 08:14:25,581 [myid:] - INFO [LearnerCnxAcceptor-/127.0.0.1:12237:Leader$LearnerCnxAcceptor@325] - exception while shutting down acceptor: java.net.SocketException: Socket closed
2016-07-21 08:14:25,582 [myid:] - INFO [main:SessionTrackerImpl@225] - Shutting down
2016-07-21 08:14:25,582 [myid:] - INFO [main:PrepRequestProcessor@765] - Shutting down
2016-07-21 08:14:25,582 [myid:] - INFO [main:ProposalRequestProcessor@88] - Shutting down
2016-07-21 08:14:25,582 [myid:] - INFO [main:CommitProcessor@184] - Shutting down
2016-07-21 08:14:25,582 [myid:] - INFO [ProcessThread(sid:5 cport:-1)::PrepRequestProcessor@143] - PrepRequestProcessor exited loop!
2016-07-21 08:14:25,583 [myid:] - INFO [CommitProcessor:5:CommitProcessor@153] - CommitProcessor exited loop!
2016-07-21 08:14:25,582 [myid:] - INFO [main:Leader$ToBeAppliedRequestProcessor@661] - Shutting down
2016-07-21 08:14:25,583 [myid:] - INFO [main:FinalRequestProcessor@402] - shutdown of request processor complete
2016-07-21 08:14:25,583 [myid:] - INFO [main:SyncRequestProcessor@209] - Shutting down
2016-07-21 08:14:25,583 [myid:] - INFO [SyncThread:5:SyncRequestProcessor@187] - SyncRequestProcessor exited!
2016-07-21 08:14:25,585 [myid:] - WARN [LearnerHandler-/127.0.0.1:43485:LearnerHandler@644] - ******* GOODBYE /127.0.0.1:43485 ********
2016-07-21 08:14:25,585 [myid:] - WARN [LearnerHandler-/127.0.0.1:43486:LearnerHandler@644] - ******* GOODBYE /127.0.0.1:43486 ********
2016-07-21 08:14:25,585 [myid:] - WARN [LearnerHandler-/127.0.0.1:43486:LearnerHandler@656] - Ignoring unexpected exception
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1219)
    at java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:340)
    at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:338)
    at org.apache.zookeeper.server.quorum.LearnerHandler.shutdown(LearnerHandler.java:654)
    at org.apache.zookeeper.server.quorum.LearnerHandler.run(LearnerHandler.java:647)
2016-07-21 08:14:25,585 [myid:] - WARN [LearnerHandler-/127.0.0.1:43485:LearnerHandler@656] - Ignoring unexpected exception
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1219)
    at java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:340)
    at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:338)
    at org.apache.zookeeper.server.quorum.LearnerHandler.shutdown(LearnerHandler.java:654)
    at org.apache.zookeeper.server.quorum.LearnerHandler.run(LearnerHandler.java:647)
2016-07-21 08:14:25,585 [myid:] - WARN [LearnerHandler-/127.0.0.1:43488:LearnerHandler@644] - ******* GOODBYE /127.0.0.1:43488 ********
2016-07-21 08:14:25,586 [myid:] - INFO [/127.0.0.1:12241:QuorumCnxManager$Listener@560] - Leaving listener
2016-07-21 08:14:25,586 [myid:] - WARN [LearnerHandler-/127.0.0.1:43487:LearnerHandler@644] - ******* GOODBYE /127.0.0.1:43487 ********
2016-07-21 08:14:25,586 [myid:] - WARN [LearnerHandler-/127.0.0.1:43487:LearnerHandler@656] - Ignoring unexpected exception
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1219)
    at java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:340)
    at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:338)
    at org.apache.zookeeper.server.quorum.LearnerHandler.shutdown(LearnerHandler.java:654)
    at org.apache.zookeeper.server.quorum.LearnerHandler.run(LearnerHandler.java:647)
2016-07-21 08:14:25,586 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11237:NIOServerCnxnFactory@219] - NIOServerCnxn factory exited run method
2016-07-21 08:14:25,586 [myid:] - WARN [LearnerHandler-/127.0.0.1:43488:LearnerHandler@656] - Ignoring unexpected exception
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1219)
    at java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:340)
    at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:338)
    at org.apache.zookeeper.server.quorum.LearnerHandler.shutdown(LearnerHandler.java:654)
    at org.apache.zookeeper.server.quorum.LearnerHandler.run(LearnerHandler.java:647)
2016-07-21 08:14:25,587 [myid:] - WARN [QuorumPeer[myid=5]/0:0:0:0:0:0:0:0:11237:QuorumPeer@862] - Unexpected exception
java.lang.InterruptedException: sleep interrupted
    at java.lang.Thread.sleep(Native Method)
    at org.apache.zookeeper.server.quorum.Leader.lead(Leader.java:456)
    at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:859)
2016-07-21 08:14:25,588 [myid:] - INFO [QuorumPeer[myid=5]/0:0:0:0:0:0:0:0:11237:Leader@496] - Shutting down
2016-07-21 08:14:25,588 [myid:] - INFO [main:QuorumBase@310] - Shutting down leader election QuorumPeer[myid=5]/0:0:0:0:0:0:0:0:11237
2016-07-21 08:14:25,588 [myid:] - ERROR [/127.0.0.1:12242:QuorumCnxManager$Listener@547] - Exception while listening java.net.SocketException: Socket closed at 
java.net.PlainSocketImpl.socketAccept(Native Method) at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398) at java.net.ServerSocket.implAccept(ServerSocket.java:530) at java.net.ServerSocket.accept(ServerSocket.java:498) at org.apache.zookeeper.server.quorum.QuorumCnxManager$Listener.run(QuorumCnxManager.java:539) 2016-07-21 08:14:25,588 [myid:] - INFO [main:QuorumBase@315] - Waiting for QuorumPeer[myid=5]/0:0:0:0:0:0:0:0:11237 to exit thread 2016-07-21 08:14:25,588 [myid:] - WARN [QuorumPeer[myid=5]/0:0:0:0:0:0:0:0:11237:QuorumPeer@874] - QuorumPeer main thread exited 2016-07-21 08:14:25,589 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11233 2016-07-21 08:14:25,589 [myid:] - INFO [main:QuorumBase@291] - 127.0.0.1:11233 is no longer accepting client connections 2016-07-21 08:14:25,589 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11234 2016-07-21 08:14:25,589 [myid:] - INFO [main:QuorumBase@291] - 127.0.0.1:11234 is no longer accepting client connections 2016-07-21 08:14:25,590 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11235 2016-07-21 08:14:25,590 [myid:] - INFO [main:QuorumBase@291] - 127.0.0.1:11235 is no longer accepting client connections 2016-07-21 08:14:25,590 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11236 2016-07-21 08:14:25,590 [myid:] - INFO [main:QuorumBase@291] - 127.0.0.1:11236 is no longer accepting client connections 2016-07-21 08:14:25,590 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11237 2016-07-21 08:14:25,590 [myid:] - INFO [main:QuorumBase@291] - 127.0.0.1:11237 is no longer accepting client connections 2016-07-21 08:14:25,592 [myid:] - INFO [main:ZKTestCase$1@60] - SUCCEEDED testLateLogs 2016-07-21 08:14:25,592 [myid:] - INFO [main:ZKTestCase$1@55] - FINISHED testLateLogs {noformat} |
flaky, flaky-test | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 1 year, 21 weeks ago | 0|i31cmf: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2485 | ZOOKEEPER-3170 Flaky Test: org.apache.zookeeper.test.FourLetterWordsTest.testFourLetterWords |
Sub-task | Closed | Major | Cannot Reproduce | Andor Molnar | Michael Han | Michael Han | 21/Jul/16 18:40 | 19/Dec/19 18:01 | 25/Oct/18 10:38 | 3.4.8 | 3.5.5 | tests | 0 | 2 | ZOOKEEPER-2135 | From https://builds.apache.org/job/ZooKeeper_branch34_jdk7/1156/ {noformat} Error Message test timed out after 30000 milliseconds Stacktrace java.lang.Exception: test timed out after 30000 milliseconds at java.io.FileOutputStream.writeBytes(Native Method) at java.io.FileOutputStream.write(FileOutputStream.java:345) at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122) at java.io.PrintStream.write(PrintStream.java:480) at java.io.PrintStream.write(PrintStream.java:480) at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221) at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291) at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:295) at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:141) at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229) at org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:59) at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:324) at org.apache.log4j.WriterAppender.append(WriterAppender.java:162) at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251) at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66) at org.apache.log4j.Category.callAppenders(Category.java:206) at org.apache.log4j.Category.forcedLog(Category.java:391) at org.apache.log4j.Category.log(Category.java:856) at org.slf4j.impl.Log4jLoggerAdapter.info(Log4jLoggerAdapter.java:305) at org.apache.zookeeper.test.FourLetterWordsTest.verify(FourLetterWordsTest.java:121) at org.apache.zookeeper.test.FourLetterWordsTest.testFourLetterWords(FourLetterWordsTest.java:52) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:55) Standard Output 2016-07-21 08:05:43,150 [myid:] - INFO [main:PortAssignment@32] - assigning 
port 11221 2016-07-21 08:05:43,156 [myid:] - INFO [main:ZKTestCase$1@50] - STARTING testDisconnectedAddAuth 2016-07-21 08:05:43,157 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@53] - RUNNING TEST METHOD testDisconnectedAddAuth 2016-07-21 08:05:43,178 [myid:] - INFO [main:Environment@100] - Server environment:zookeeper.version=3.4.9-SNAPSHOT-1753645, built on 07/21/2016 07:46 GMT 2016-07-21 08:05:43,178 [myid:] - INFO [main:Environment@100] - Server environment:host.name=asf907.gq1.ygridcore.net 2016-07-21 08:05:43,178 [myid:] - INFO [main:Environment@100] - Server environment:java.version=1.7.0_80 2016-07-21 08:05:43,178 [myid:] - INFO [main:Environment@100] - Server environment:java.vendor=Oracle Corporation 2016-07-21 08:05:43,178 [myid:] - INFO [main:Environment@100] - Server environment:java.home=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.7/jre 2016-07-21 08:05:43,179 [myid:] - INFO [main:Environment@100] - Server environment:java.class.path=/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/classes:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/lib/antlr-2.7.6.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/lib/checkstyle-5.0.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/lib/commons-beanutils-core-1.7.0.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/lib/commons-cli-1.0.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/lib/commons-collections-3.2.2.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/lib/commons-lang-1.0.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/lib/commons-logging-1.0.3.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/lib/google-collections-0.9.jar:/home/jenkins/jenkins-
slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/lib/junit-4.8.1.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/lib/mockito-all-1.8.2.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/classes:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/src/java/lib/ivy-2.4.0.jar:/home/jenkins/tools/ant/latest/lib/ant.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/lib/jline-0.9.94.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/lib/log4j-1.2.16.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/lib/netty-3.10.5.Final.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/lib/slf4j-api-1.6.1.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/lib/slf4j-log4j12-1.6.1.jar:/home/jenkins/tools/ant/apache-ant-1.9.4/lib/ant-launcher.jar:/home/jenkins/tools/ant/latest/lib/ant-junit.jar:/home/jenkins/tools/ant/latest/lib/ant-junit4.jar 2016-07-21 08:05:43,179 [myid:] - INFO [main:Environment@100] - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib 2016-07-21 08:05:43,179 [myid:] - INFO [main:Environment@100] - Server environment:java.io.tmpdir=/tmp 2016-07-21 08:05:43,179 [myid:] - INFO [main:Environment@100] - Server environment:java.compiler=<NA> 2016-07-21 08:05:43,179 [myid:] - INFO [main:Environment@100] - Server environment:os.name=Linux 2016-07-21 08:05:43,180 [myid:] - INFO [main:Environment@100] - Server environment:os.arch=amd64 2016-07-21 08:05:43,180 [myid:] - INFO [main:Environment@100] - Server environment:os.version=3.13.0-36-lowlatency 2016-07-21 08:05:43,180 [myid:] - INFO [main:Environment@100] - Server environment:user.name=jenkins 2016-07-21 08:05:43,182 [myid:] - INFO [main:Environment@100] - Server environment:user.home=/home/jenkins 2016-07-21 
08:05:43,182 [myid:] - INFO [main:Environment@100] - Server environment:user.dir=/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4 2016-07-21 08:05:43,199 [myid:] - INFO [main:ZooKeeperServer@170] - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/tmp/test4168931007806197633.junit.dir/version-2 snapdir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/tmp/test4168931007806197633.junit.dir/version-2 2016-07-21 08:05:43,295 [myid:] - INFO [main:NettyServerCnxnFactory@365] - binding to port 0.0.0.0/0.0.0.0:11221 2016-07-21 08:05:43,387 [myid:] - INFO [main:ACLTest@62] - starting up the zookeeper server .. waiting 2016-07-21 08:05:43,388 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11221 2016-07-21 08:05:43,420 [myid:] - INFO [New I/O worker #1:NettyServerCnxn@632] - Processing stat command from /127.0.0.1:35232 2016-07-21 08:05:43,422 [myid:] - INFO [New I/O worker #1:NettyServerCnxn$StatCommand@469] - Stat command output 2016-07-21 08:05:43,431 [myid:] - INFO [main:Environment@100] - Client environment:zookeeper.version=3.4.9-SNAPSHOT-1753645, built on 07/21/2016 07:46 GMT 2016-07-21 08:05:43,431 [myid:] - INFO [main:Environment@100] - Client environment:host.name=asf907.gq1.ygridcore.net 2016-07-21 08:05:43,431 [myid:] - INFO [main:Environment@100] - Client environment:java.version=1.7.0_80 2016-07-21 08:05:43,431 [myid:] - INFO [main:Environment@100] - Client environment:java.vendor=Oracle Corporation 2016-07-21 08:05:43,431 [myid:] - INFO [main:Environment@100] - Client environment:java.home=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.7/jre 2016-07-21 08:05:43,431 [myid:] - INFO [main:Environment@100] - Client 
environment:java.class.path=/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/classes:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/lib/antlr-2.7.6.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/lib/checkstyle-5.0.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/lib/commons-beanutils-core-1.7.0.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/lib/commons-cli-1.0.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/lib/commons-collections-3.2.2.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/lib/commons-lang-1.0.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/lib/commons-logging-1.0.3.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/lib/google-collections-0.9.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/lib/junit-4.8.1.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/lib/mockito-all-1.8.2.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/classes:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/src/java/lib/ivy-2.4.0.jar:/home/jenkins/tools/ant/latest/lib/ant.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/lib/jline-0.9.94.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/lib/log4j-1.2.16.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/lib/netty-3.10.5.Final.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/lib/slf4j-api-1.6.1.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/lib/slf4j-log4j12-1.6.1.jar:/home/jenkins/tools/ant/
apache-ant-1.9.4/lib/ant-launcher.jar:/home/jenkins/tools/ant/latest/lib/ant-junit.jar:/home/jenkins/tools/ant/latest/lib/ant-junit4.jar 2016-07-21 08:05:43,432 [myid:] - INFO [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib 2016-07-21 08:05:43,432 [myid:] - INFO [main:Environment@100] - Client environment:java.io.tmpdir=/tmp 2016-07-21 08:05:43,432 [myid:] - INFO [main:Environment@100] - Client environment:java.compiler=<NA> 2016-07-21 08:05:43,432 [myid:] - INFO [main:Environment@100] - Client environment:os.name=Linux 2016-07-21 08:05:43,432 [myid:] - INFO [main:Environment@100] - Client environment:os.arch=amd64 2016-07-21 08:05:43,433 [myid:] - INFO [main:Environment@100] - Client environment:os.version=3.13.0-36-lowlatency 2016-07-21 08:05:43,433 [myid:] - INFO [main:Environment@100] - Client environment:user.name=jenkins 2016-07-21 08:05:43,433 [myid:] - INFO [main:Environment@100] - Client environment:user.home=/home/jenkins 2016-07-21 08:05:43,433 [myid:] - INFO [main:Environment@100] - Client environment:user.dir=/home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4 2016-07-21 08:05:43,434 [myid:] - INFO [main:ZooKeeper@438] - Initiating client connection, connectString=127.0.0.1:11221 sessionTimeout=30000 watcher=org.apache.zookeeper.test.ACLTest@14f8e8b 2016-07-21 08:05:43,450 [myid:] - INFO [main-SendThread(127.0.0.1:11221):ClientCnxn$SendThread@1032] - Opening socket connection to server 127.0.0.1/127.0.0.1:11221. 
Will not attempt to authenticate using SASL (unknown error) 2016-07-21 08:05:43,450 [myid:] - INFO [main-SendThread(127.0.0.1:11221):ClientCnxn$SendThread@876] - Socket connection established to 127.0.0.1/127.0.0.1:11221, initiating session 2016-07-21 08:05:43,454 [myid:] - INFO [New I/O worker #2:ZooKeeperServer@900] - Client attempting to establish new session at /127.0.0.1:35233 2016-07-21 08:05:43,458 [myid:] - INFO [SyncThread:0:FileTxnLog@203] - Creating new log file: log.1 2016-07-21 08:05:43,468 [myid:] - INFO [SyncThread:0:ZooKeeperServer@645] - Established session 0x1560c7d251c0000 with negotiated timeout 30000 for client /127.0.0.1:35233 2016-07-21 08:05:43,471 [myid:] - INFO [main-SendThread(127.0.0.1:11221):ClientCnxn$SendThread@1299] - Session establishment complete on server 127.0.0.1/127.0.0.1:11221, sessionid = 0x1560c7d251c0000, negotiated timeout = 30000 2016-07-21 08:05:43,474 [myid:] - INFO [main-EventThread:ACLTest@175] - Event:SyncConnected None null 2016-07-21 08:05:43,475 [myid:] - INFO [New I/O worker #2:ZooKeeperServer@924] - got auth packet /127.0.0.1:35233 2016-07-21 08:05:43,475 [myid:] - WARN [main-EventThread:ACLTest@182] - startsignal null 2016-07-21 08:05:43,478 [myid:] - INFO [New I/O worker #2:ZooKeeperServer@958] - auth success /127.0.0.1:35233 2016-07-21 08:05:43,479 [myid:] - INFO [New I/O worker #2:ZooKeeperServer@924] - got auth packet /127.0.0.1:35233 2016-07-21 08:05:43,479 [myid:] - INFO [New I/O worker #2:ZooKeeperServer@958] - auth success /127.0.0.1:35233 2016-07-21 08:05:43,485 [myid:] - INFO [ProcessThread(sid:0 cport:11221)::PrepRequestProcessor@487] - Processed session termination for sessionid: 0x1560c7d251c0000 2016-07-21 08:05:43,488 [myid:] - WARN [New I/O worker #2:NettyServerCnxnFactory$CnxnChannelHandler@111] - Exception caught [id: 0xbe54f7b5, /127.0.0.1:35233 :> /127.0.0.1:11221] EXCEPTION: java.nio.channels.ClosedChannelException java.nio.channels.ClosedChannelException at 
sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:270) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:479) at org.jboss.netty.channel.socket.nio.SocketSendBufferPool$UnpooledSendBuffer.transferTo(SocketSendBufferPool.java:203) at org.jboss.netty.channel.socket.nio.AbstractNioWorker.write0(AbstractNioWorker.java:201) at org.jboss.netty.channel.socket.nio.AbstractNioWorker.writeFromTaskLoop(AbstractNioWorker.java:151) at org.jboss.netty.channel.socket.nio.AbstractNioChannel$WriteTask.run(AbstractNioChannel.java:315) at org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:391) at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:315) at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 2016-07-21 08:05:43,587 [myid:] - INFO [main:ZooKeeper@684] - Session: 0x1560c7d251c0000 closed 2016-07-21 08:05:43,588 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@519] - EventThread shut down for session: 0x1560c7d251c0000 2016-07-21 08:05:43,588 [myid:] - INFO [main:NettyServerCnxnFactory@342] - shutdown called 0.0.0.0/0.0.0.0:11221 2016-07-21 08:05:43,607 [myid:] - INFO [main:ZooKeeperServer@469] - shutting down 2016-07-21 08:05:43,607 [myid:] - INFO [main:SessionTrackerImpl@225] - Shutting down 2016-07-21 08:05:43,607 [myid:] - INFO [main:PrepRequestProcessor@765] - Shutting down 2016-07-21 08:05:43,608 [myid:] - INFO [main:SyncRequestProcessor@209] - Shutting down 2016-07-21 
08:05:43,608 [myid:] - INFO [ProcessThread(sid:0 cport:11221)::PrepRequestProcessor@143] - PrepRequestProcessor exited loop! 2016-07-21 08:05:43,608 [myid:] - INFO [SyncThread:0:SyncRequestProcessor@187] - SyncRequestProcessor exited! 2016-07-21 08:05:43,608 [myid:] - INFO [main:FinalRequestProcessor@402] - shutdown of request processor complete 2016-07-21 08:05:43,609 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11221 2016-07-21 08:05:43,610 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@58] - Memory used 124233 2016-07-21 08:05:43,611 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@63] - Number of threads 5 2016-07-21 08:05:43,611 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@78] - FINISHED TEST METHOD testDisconnectedAddAuth 2016-07-21 08:05:43,611 [myid:] - INFO [main:ZKTestCase$1@60] - SUCCEEDED testDisconnectedAddAuth 2016-07-21 08:05:43,611 [myid:] - INFO [main:ZKTestCase$1@55] - FINISHED testDisconnectedAddAuth 2016-07-21 08:05:43,613 [myid:] - INFO [main:ZKTestCase$1@50] - STARTING testAcls 2016-07-21 08:05:43,613 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@53] - RUNNING TEST METHOD testAcls 2016-07-21 08:05:43,614 [myid:] - INFO [main:ZooKeeperServer@170] - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/tmp/test2000752507502946075.junit.dir/version-2 snapdir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/tmp/test2000752507502946075.junit.dir/version-2 2016-07-21 08:05:43,635 [myid:] - INFO [main:NettyServerCnxnFactory@365] - binding to port 0.0.0.0/0.0.0.0:11221 2016-07-21 08:05:43,639 [myid:] - INFO [main:ACLTest@98] - starting up the zookeeper server .. 
waiting 2016-07-21 08:05:43,639 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11221 2016-07-21 08:05:43,640 [myid:] - INFO [New I/O worker #34:NettyServerCnxn@632] - Processing stat command from /127.0.0.1:35236 2016-07-21 08:05:43,641 [myid:] - INFO [New I/O worker #34:NettyServerCnxn$StatCommand@469] - Stat command output 2016-07-21 08:05:43,641 [myid:] - INFO [main:ZooKeeper@438] - Initiating client connection, connectString=127.0.0.1:11221 sessionTimeout=30000 watcher=org.apache.zookeeper.test.ACLTest@114f6322 2016-07-21 08:05:43,642 [myid:] - INFO [main:ACLTest@102] - starting creating acls 2016-07-21 08:05:43,642 [myid:] - INFO [main-SendThread(127.0.0.1:11221):ClientCnxn$SendThread@1032] - Opening socket connection to server 127.0.0.1/127.0.0.1:11221. Will not attempt to authenticate using SASL (unknown error) 2016-07-21 08:05:43,643 [myid:] - INFO [main-SendThread(127.0.0.1:11221):ClientCnxn$SendThread@876] - Socket connection established to 127.0.0.1/127.0.0.1:11221, initiating session 2016-07-21 08:05:43,644 [myid:] - INFO [New I/O worker #35:ZooKeeperServer@900] - Client attempting to establish new session at /127.0.0.1:35237 2016-07-21 08:05:43,644 [myid:] - INFO [SyncThread:0:FileTxnLog@203] - Creating new log file: log.1 2016-07-21 08:05:43,648 [myid:] - INFO [SyncThread:0:ZooKeeperServer@645] - Established session 0x1560c7d26540000 with negotiated timeout 30000 for client /127.0.0.1:35237 2016-07-21 08:05:43,648 [myid:] - INFO [main-SendThread(127.0.0.1:11221):ClientCnxn$SendThread@1299] - Session establishment complete on server 127.0.0.1/127.0.0.1:11221, sessionid = 0x1560c7d26540000, negotiated timeout = 30000 2016-07-21 08:05:43,648 [myid:] - INFO [main-EventThread:ACLTest@175] - Event:SyncConnected None null 2016-07-21 08:05:43,648 [myid:] - WARN [main-EventThread:ACLTest@182] - startsignal null 2016-07-21 08:05:43,954 [myid:] - INFO [main:NettyServerCnxnFactory@342] - shutdown called 0.0.0.0/0.0.0.0:11221 2016-07-21 
08:05:43,955 [myid:] - INFO [main-SendThread(127.0.0.1:11221):ClientCnxn$SendThread@1158] - Unable to read additional data from server sessionid 0x1560c7d26540000, likely server has closed socket, closing socket connection and attempting reconnect 2016-07-21 08:05:43,961 [myid:] - INFO [main:ZooKeeperServer@469] - shutting down 2016-07-21 08:05:43,961 [myid:] - INFO [main:SessionTrackerImpl@225] - Shutting down 2016-07-21 08:05:43,961 [myid:] - INFO [main:PrepRequestProcessor@765] - Shutting down 2016-07-21 08:05:43,962 [myid:] - INFO [main:SyncRequestProcessor@209] - Shutting down 2016-07-21 08:05:43,962 [myid:] - INFO [ProcessThread(sid:0 cport:11221)::PrepRequestProcessor@143] - PrepRequestProcessor exited loop! 2016-07-21 08:05:43,962 [myid:] - INFO [SyncThread:0:SyncRequestProcessor@187] - SyncRequestProcessor exited! 2016-07-21 08:05:43,963 [myid:] - INFO [main:FinalRequestProcessor@402] - shutdown of request processor complete 2016-07-21 08:05:43,963 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11221 2016-07-21 08:05:43,964 [myid:] - INFO [main:ZooKeeperServer@170] - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/tmp/test2000752507502946075.junit.dir/version-2 snapdir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/tmp/test2000752507502946075.junit.dir/version-2 2016-07-21 08:05:43,972 [myid:] - INFO [main:NettyServerCnxnFactory@365] - binding to port 0.0.0.0/0.0.0.0:11221 2016-07-21 08:05:44,002 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11221 2016-07-21 08:05:44,004 [myid:] - INFO [New I/O worker #67:NettyServerCnxn@632] - Processing stat command from /127.0.0.1:35240 2016-07-21 08:05:44,004 [myid:] - INFO [New I/O worker #67:NettyServerCnxn$StatCommand@469] - Stat command output 2016-07-21 08:05:44,055 [myid:] - INFO 
[main-EventThread:ACLTest@175] - Event:Disconnected None null 2016-07-21 08:05:45,000 [myid:] - INFO [SessionTracker:SessionTrackerImpl@162] - SessionTrackerImpl exited loop! 2016-07-21 08:05:45,000 [myid:] - INFO [SessionTracker:SessionTrackerImpl@162] - SessionTrackerImpl exited loop! 2016-07-21 08:05:45,784 [myid:] - INFO [main-SendThread(127.0.0.1:11221):ClientCnxn$SendThread@1032] - Opening socket connection to server 127.0.0.1/127.0.0.1:11221. Will not attempt to authenticate using SASL (unknown error) 2016-07-21 08:05:45,784 [myid:] - INFO [main-SendThread(127.0.0.1:11221):ClientCnxn$SendThread@876] - Socket connection established to 127.0.0.1/127.0.0.1:11221, initiating session 2016-07-21 08:05:45,785 [myid:] - INFO [New I/O worker #68:ZooKeeperServer@893] - Client attempting to renew session 0x1560c7d26540000 at /127.0.0.1:35246 2016-07-21 08:05:45,786 [myid:] - INFO [New I/O worker #68:ZooKeeperServer@645] - Established session 0x1560c7d26540000 with negotiated timeout 30000 for client /127.0.0.1:35246 2016-07-21 08:05:45,786 [myid:] - INFO [main-SendThread(127.0.0.1:11221):ClientCnxn$SendThread@1299] - Session establishment complete on server 127.0.0.1/127.0.0.1:11221, sessionid = 0x1560c7d26540000, negotiated timeout = 30000 2016-07-21 08:05:45,786 [myid:] - INFO [main-EventThread:ACLTest@175] - Event:SyncConnected None null 2016-07-21 08:05:45,786 [myid:] - INFO [main-EventThread:ACLTest@179] - startsignal.countDown() 2016-07-21 08:05:45,788 [myid:] - INFO [SyncThread:0:FileTxnLog@203] - Creating new log file: log.ca 2016-07-21 08:05:45,795 [myid:] - INFO [ProcessThread(sid:0 cport:11221)::PrepRequestProcessor@487] - Processed session termination for sessionid: 0x1560c7d26540000 2016-07-21 08:05:45,795 [myid:] - WARN [New I/O worker #68:NettyServerCnxnFactory$CnxnChannelHandler@111] - Exception caught [id: 0x0ae869d5, /127.0.0.1:35246 :> /127.0.0.1:11221] EXCEPTION: java.nio.channels.ClosedChannelException java.nio.channels.ClosedChannelException at 
sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:270)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:479)
    at org.jboss.netty.channel.socket.nio.SocketSendBufferPool$UnpooledSendBuffer.transferTo(SocketSendBufferPool.java:203)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.write0(AbstractNioWorker.java:201)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.writeFromTaskLoop(AbstractNioWorker.java:151)
    at org.jboss.netty.channel.socket.nio.AbstractNioChannel$WriteTask.run(AbstractNioChannel.java:315)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:391)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:315)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
2016-07-21 08:05:45,896 [myid:] - INFO [main:ZooKeeper@684] - Session: 0x1560c7d26540000 closed
2016-07-21 08:05:45,896 [myid:] - INFO [main:NettyServerCnxnFactory@342] - shutdown called 0.0.0.0/0.0.0.0:11221
2016-07-21 08:05:45,896 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@519] - EventThread shut down for session: 0x1560c7d26540000
2016-07-21 08:05:45,901 [myid:] - INFO [main:ZooKeeperServer@469] - shutting down
2016-07-21 08:05:45,901 [myid:] - INFO [main:SessionTrackerImpl@225] - Shutting down
2016-07-21 08:05:45,901 [myid:] - INFO [main:PrepRequestProcessor@765] - Shutting down
2016-07-21 08:05:45,902 [myid:] - INFO [ProcessThread(sid:0 cport:11221)::PrepRequestProcessor@143] - PrepRequestProcessor exited loop!
2016-07-21 08:05:45,902 [myid:] - INFO [main:SyncRequestProcessor@209] - Shutting down
2016-07-21 08:05:45,902 [myid:] - INFO [SyncThread:0:SyncRequestProcessor@187] - SyncRequestProcessor exited!
2016-07-21 08:05:45,902 [myid:] - INFO [main:FinalRequestProcessor@402] - shutdown of request processor complete
2016-07-21 08:05:45,903 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11221
2016-07-21 08:05:45,904 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@58] - Memory used 81805
2016-07-21 08:05:45,904 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@63] - Number of threads 7
2016-07-21 08:05:45,904 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@78] - FINISHED TEST METHOD testAcls
2016-07-21 08:05:45,904 [myid:] - INFO [main:ZKTestCase$1@60] - SUCCEEDED testAcls
2016-07-21 08:05:45,905 [myid:] - INFO [main:ZKTestCase$1@55] - FINISHED testAcls
2016-07-21 08:05:45,914 [myid:] - INFO [main:PortAssignment@32] - assigning port 11222
2016-07-21 08:05:45,915 [myid:] - INFO [main:ZKTestCase$1@50] - STARTING testAsyncCreate
2016-07-21 08:05:45,916 [myid:] - INFO [main:ClientBase@425] - Initial fdcount is: 33
2016-07-21 08:05:46,035 [myid:] - INFO [main:ClientBase@443] - STARTING server
2016-07-21 08:05:46,035 [myid:] - INFO [main:ClientBase@364] - CREATING server instance 127.0.0.1:11222
2016-07-21 08:05:46,042 [myid:] - INFO [main:ClientBase@339] - STARTING server instance 127.0.0.1:11222
2016-07-21 08:05:46,043 [myid:] - INFO [main:ZooKeeperServer@170] - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/tmp/test4160740823067989579.junit.dir/version-2 snapdir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/tmp/test4160740823067989579.junit.dir/version-2
2016-07-21 08:05:46,043 [myid:] - INFO [main:NettyServerCnxnFactory@365] - binding to port 0.0.0.0/0.0.0.0:11222
2016-07-21 08:05:46,046 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11222
2016-07-21 08:05:46,047 [myid:] - INFO [New I/O worker #100:NettyServerCnxn@632] - Processing stat command from /127.0.0.1:49831
2016-07-21 08:05:46,047 [myid:] - INFO [New I/O worker #100:NettyServerCnxn$StatCommand@469] - Stat command output
2016-07-21 08:05:46,048 [myid:] - INFO [main:JMXEnv@229] - ensureParent:[InMemoryDataTree, StandaloneServer_port]
2016-07-21 08:05:46,053 [myid:] - INFO [main:JMXEnv@246] - expect:InMemoryDataTree
2016-07-21 08:05:46,053 [myid:] - INFO [main:JMXEnv@250] - found:InMemoryDataTree org.apache.ZooKeeperService:name0=StandaloneServer_port11222,name1=InMemoryDataTree
2016-07-21 08:05:46,054 [myid:] - INFO [main:JMXEnv@246] - expect:StandaloneServer_port
2016-07-21 08:05:46,054 [myid:] - INFO [main:JMXEnv@250] - found:StandaloneServer_port org.apache.ZooKeeperService:name0=StandaloneServer_port11222
2016-07-21 08:05:46,054 [myid:] - INFO [main:ClientBase@439] - Client test setup finished
2016-07-21 08:05:46,054 [myid:] - INFO [main:AsyncOpsTest@47] - Creating client testAsyncCreate
2016-07-21 08:05:46,055 [myid:] - INFO [main:ZooKeeper@438] - Initiating client connection, connectString=127.0.0.1:11222 sessionTimeout=30000 watcher=org.apache.zookeeper.test.ClientBase$CountdownWatcher@42701c57
2016-07-21 08:05:46,056 [myid:] - INFO [main-SendThread(127.0.0.1:11222):ClientCnxn$SendThread@1032] - Opening socket connection to server 127.0.0.1/127.0.0.1:11222. Will not attempt to authenticate using SASL (unknown error)
2016-07-21 08:05:46,056 [myid:] - INFO [main-SendThread(127.0.0.1:11222):ClientCnxn$SendThread@876] - Socket connection established to 127.0.0.1/127.0.0.1:11222, initiating session
2016-07-21 08:05:46,057 [myid:] - INFO [New I/O worker #101:ZooKeeperServer@900] - Client attempting to establish new session at /127.0.0.1:49832
2016-07-21 08:05:46,057 [myid:] - INFO [SyncThread:0:FileTxnLog@203] - Creating new log file: log.1
2016-07-21 08:05:46,060 [myid:] - INFO [SyncThread:0:ZooKeeperServer@645] - Established session 0x1560c7d2fbb0000 with negotiated timeout 30000 for client /127.0.0.1:49832
2016-07-21 08:05:46,060 [myid:] - INFO [main-SendThread(127.0.0.1:11222):ClientCnxn$SendThread@1299] - Session establishment complete on server 127.0.0.1/127.0.0.1:11222, sessionid = 0x1560c7d2fbb0000, negotiated timeout = 30000
2016-07-21 08:05:46,062 [myid:] - INFO [main:JMXEnv@117] - expect:0x1560c7d2fbb0000
2016-07-21 08:05:46,062 [myid:] - INFO [main:JMXEnv@120] - found:0x1560c7d2fbb0000 org.apache.ZooKeeperService:name0=StandaloneServer_port11222,name1=Connections,name2=127.0.0.1,name3=0x1560c7d2fbb0000
2016-07-21 08:05:46,063 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@53] - RUNNING TEST METHOD testAsyncCreate
2016-07-21 08:05:46,063 [myid:] - INFO [New I/O worker #101:ZooKeeperServer@924] - got auth packet /127.0.0.1:49832
2016-07-21 08:05:46,063 [myid:] - INFO [New I/O worker #101:ZooKeeperServer@958] - auth success /127.0.0.1:49832
2016-07-21 08:05:46,069 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@58] - Memory used 40629
2016-07-21 08:05:46,069 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@63] - Number of threads 51
2016-07-21 08:05:46,069 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@78] - FINISHED TEST METHOD testAsyncCreate
2016-07-21 08:05:46,070 [myid:] - INFO [ProcessThread(sid:0 cport:11222)::PrepRequestProcessor@487] - Processed session termination for sessionid: 0x1560c7d2fbb0000
2016-07-21 08:05:46,070 [myid:] - WARN [New I/O worker #101:NettyServerCnxnFactory$CnxnChannelHandler@111] - Exception caught [id: 0xe25524be, /127.0.0.1:49832 :> /127.0.0.1:11222] EXCEPTION: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:270)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:479)
    at org.jboss.netty.channel.socket.nio.SocketSendBufferPool$UnpooledSendBuffer.transferTo(SocketSendBufferPool.java:203)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.write0(AbstractNioWorker.java:201)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.writeFromTaskLoop(AbstractNioWorker.java:151)
    at org.jboss.netty.channel.socket.nio.AbstractNioChannel$WriteTask.run(AbstractNioChannel.java:315)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:391)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:315)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
2016-07-21 08:05:46,171 [myid:] - INFO [main:ZooKeeper@684] - Session: 0x1560c7d2fbb0000 closed
2016-07-21 08:05:46,171 [myid:] - INFO [main:ClientBase@520] - tearDown starting
2016-07-21 08:05:46,171 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@519] - EventThread shut down for session: 0x1560c7d2fbb0000
2016-07-21 08:05:46,171 [myid:] - INFO [main:ClientBase@490] - STOPPING server
2016-07-21 08:05:46,172 [myid:] - INFO [main:NettyServerCnxnFactory@342] - shutdown called 0.0.0.0/0.0.0.0:11222
2016-07-21 08:05:46,183 [myid:] - INFO [main:ZooKeeperServer@469] - shutting down
2016-07-21 08:05:46,183 [myid:] - INFO [main:SessionTrackerImpl@225] - Shutting down
2016-07-21 08:05:46,183 [myid:] - INFO [main:PrepRequestProcessor@765] - Shutting down
2016-07-21 08:05:46,184 [myid:] - INFO [main:SyncRequestProcessor@209] - Shutting down
2016-07-21 08:05:46,184 [myid:] - INFO [ProcessThread(sid:0 cport:11222)::PrepRequestProcessor@143] - PrepRequestProcessor exited loop!
2016-07-21 08:05:46,184 [myid:] - INFO [SyncThread:0:SyncRequestProcessor@187] - SyncRequestProcessor exited!
2016-07-21 08:05:46,185 [myid:] - INFO [main:FinalRequestProcessor@402] - shutdown of request processor complete
2016-07-21 08:05:46,185 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11222
2016-07-21 08:05:46,186 [myid:] - INFO [main:JMXEnv@146] - ensureOnly:[]
2016-07-21 08:05:46,189 [myid:] - INFO [main:ClientBase@545] - fdcount after test is: 34 at start it was 33
2016-07-21 08:05:46,190 [myid:] - INFO [main:ClientBase@547] - sleeping for 20 secs
2016-07-21 08:05:46,190 [myid:] - INFO [main:AsyncOpsTest@60] - Test clients shutting down
2016-07-21 08:05:46,191 [myid:] - INFO [main:ZKTestCase$1@60] - SUCCEEDED testAsyncCreate
2016-07-21 08:05:46,191 [myid:] - INFO [main:ZKTestCase$1@55] - FINISHED testAsyncCreate
2016-07-21 08:05:46,192 [myid:] - INFO [main:PortAssignment@32] - assigning port 11223
2016-07-21 08:05:46,193 [myid:] - INFO [main:ZKTestCase$1@50] - STARTING testAsyncCreateThree
2016-07-21 08:05:46,193 [myid:] - INFO [main:ClientBase@425] - Initial fdcount is: 34
2016-07-21 08:05:46,204 [myid:] - INFO [main:ClientBase@443] - STARTING server
2016-07-21 08:05:46,204 [myid:] - INFO [main:ClientBase@364] - CREATING server instance 127.0.0.1:11223
2016-07-21 08:05:46,212 [myid:] - INFO [main:ClientBase@339] - STARTING server instance 127.0.0.1:11223
2016-07-21 08:05:46,212 [myid:] - INFO [main:ZooKeeperServer@170] - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/tmp/test8798982101480729726.junit.dir/version-2 snapdir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/tmp/test8798982101480729726.junit.dir/version-2
2016-07-21 08:05:46,212 [myid:] - INFO [main:NettyServerCnxnFactory@365] - binding to port 0.0.0.0/0.0.0.0:11223
2016-07-21 08:05:46,214 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11223
2016-07-21 08:05:46,215 [myid:] - INFO [New I/O worker #133:NettyServerCnxn@632] - Processing stat command from /127.0.0.1:38741
2016-07-21 08:05:46,215 [myid:] - INFO [New I/O worker #133:NettyServerCnxn$StatCommand@469] - Stat command output
2016-07-21 08:05:46,216 [myid:] - INFO [main:JMXEnv@229] - ensureParent:[InMemoryDataTree, StandaloneServer_port]
2016-07-21 08:05:46,218 [myid:] - INFO [main:JMXEnv@246] - expect:InMemoryDataTree
2016-07-21 08:05:46,218 [myid:] - INFO [main:JMXEnv@250] - found:InMemoryDataTree org.apache.ZooKeeperService:name0=StandaloneServer_port11223,name1=InMemoryDataTree
2016-07-21 08:05:46,218 [myid:] - INFO [main:JMXEnv@246] - expect:StandaloneServer_port
2016-07-21 08:05:46,218 [myid:] - INFO [main:JMXEnv@250] - found:StandaloneServer_port org.apache.ZooKeeperService:name0=StandaloneServer_port11223
2016-07-21 08:05:46,218 [myid:] - INFO [main:ClientBase@439] - Client test setup finished
2016-07-21 08:05:46,219 [myid:] - INFO [main:AsyncOpsTest@47] - Creating client testAsyncCreateThree
2016-07-21 08:05:46,219 [myid:] - INFO [main:ZooKeeper@438] - Initiating client connection, connectString=127.0.0.1:11223 sessionTimeout=30000 watcher=org.apache.zookeeper.test.ClientBase$CountdownWatcher@41327667
2016-07-21 08:05:46,220 [myid:] - INFO [main-SendThread(127.0.0.1:11223):ClientCnxn$SendThread@1032] - Opening socket connection to server 127.0.0.1/127.0.0.1:11223. Will not attempt to authenticate using SASL (unknown error)
2016-07-21 08:05:46,220 [myid:] - INFO [main-SendThread(127.0.0.1:11223):ClientCnxn$SendThread@876] - Socket connection established to 127.0.0.1/127.0.0.1:11223, initiating session
2016-07-21 08:05:46,221 [myid:] - INFO [New I/O worker #134:ZooKeeperServer@900] - Client attempting to establish new session at /127.0.0.1:38742
2016-07-21 08:05:46,221 [myid:] - INFO [SyncThread:0:FileTxnLog@203] - Creating new log file: log.1
2016-07-21 08:05:46,223 [myid:] - INFO [SyncThread:0:ZooKeeperServer@645] - Established session 0x1560c7d30650000 with negotiated timeout 30000 for client /127.0.0.1:38742
2016-07-21 08:05:46,224 [myid:] - INFO [main-SendThread(127.0.0.1:11223):ClientCnxn$SendThread@1299] - Session establishment complete on server 127.0.0.1/127.0.0.1:11223, sessionid = 0x1560c7d30650000, negotiated timeout = 30000
2016-07-21 08:05:46,226 [myid:] - INFO [main:JMXEnv@117] - expect:0x1560c7d30650000
2016-07-21 08:05:46,226 [myid:] - INFO [main:JMXEnv@120] - found:0x1560c7d30650000 org.apache.ZooKeeperService:name0=StandaloneServer_port11223,name1=Connections,name2=127.0.0.1,name3=0x1560c7d30650000
2016-07-21 08:05:46,226 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@53] - RUNNING TEST METHOD testAsyncCreateThree
2016-07-21 08:05:46,227 [myid:] - INFO [New I/O worker #134:ZooKeeperServer@924] - got auth packet /127.0.0.1:38742
2016-07-21 08:05:46,227 [myid:] - INFO [New I/O worker #134:ZooKeeperServer@958] - auth success /127.0.0.1:38742
2016-07-21 08:05:46,230 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@58] - Memory used 77876
2016-07-21 08:05:46,230 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@63] - Number of threads 52
2016-07-21 08:05:46,230 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@78] - FINISHED TEST METHOD testAsyncCreateThree
2016-07-21 08:05:46,231 [myid:] - INFO [ProcessThread(sid:0 cport:11223)::PrepRequestProcessor@487] - Processed session termination for sessionid: 0x1560c7d30650000
2016-07-21 08:05:46,232 [myid:] - WARN [New I/O worker #134:NettyServerCnxnFactory$CnxnChannelHandler@111] - Exception caught [id: 0x84929b58, /127.0.0.1:38742 :> /127.0.0.1:11223] EXCEPTION: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:270)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:479)
    at org.jboss.netty.channel.socket.nio.SocketSendBufferPool$UnpooledSendBuffer.transferTo(SocketSendBufferPool.java:203)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.write0(AbstractNioWorker.java:201)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.writeFromTaskLoop(AbstractNioWorker.java:151)
    at org.jboss.netty.channel.socket.nio.AbstractNioChannel$WriteTask.run(AbstractNioChannel.java:315)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:391)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:315)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
2016-07-21 08:05:46,332 [myid:] - INFO [main:ZooKeeper@684] - Session: 0x1560c7d30650000 closed
2016-07-21 08:05:46,333 [myid:] - INFO [main:ClientBase@520] - tearDown starting
2016-07-21 08:05:46,333 [myid:] - INFO [main:ClientBase@490] - STOPPING server
2016-07-21 08:05:46,332 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@519] - EventThread shut down for session: 0x1560c7d30650000
2016-07-21 08:05:46,333 [myid:] - INFO [main:NettyServerCnxnFactory@342] - shutdown called 0.0.0.0/0.0.0.0:11223
2016-07-21 08:05:46,340 [myid:] - INFO [main:ZooKeeperServer@469] - shutting down
2016-07-21 08:05:46,341 [myid:] - INFO [main:SessionTrackerImpl@225] - Shutting down
2016-07-21 08:05:46,341 [myid:] - INFO [main:PrepRequestProcessor@765] - Shutting down
2016-07-21 08:05:46,341 [myid:] - INFO [main:SyncRequestProcessor@209] - Shutting down
2016-07-21 08:05:46,341 [myid:] - INFO [ProcessThread(sid:0 cport:11223)::PrepRequestProcessor@143] - PrepRequestProcessor exited loop!
2016-07-21 08:05:46,342 [myid:] - INFO [SyncThread:0:SyncRequestProcessor@187] - SyncRequestProcessor exited!
2016-07-21 08:05:46,342 [myid:] - INFO [main:FinalRequestProcessor@402] - shutdown of request processor complete
2016-07-21 08:05:46,343 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11223
2016-07-21 08:05:46,343 [myid:] - INFO [main:JMXEnv@146] - ensureOnly:[]
2016-07-21 08:05:46,346 [myid:] - INFO [main:ClientBase@545] - fdcount after test is: 34 at start it was 34
2016-07-21 08:05:46,346 [myid:] - INFO [main:AsyncOpsTest@60] - Test clients shutting down
2016-07-21 08:05:46,347 [myid:] - INFO [main:ZKTestCase$1@60] - SUCCEEDED testAsyncCreateThree
2016-07-21 08:05:46,347 [myid:] - INFO [main:ZKTestCase$1@55] - FINISHED testAsyncCreateThree
2016-07-21 08:05:46,347 [myid:] - INFO [main:PortAssignment@32] - assigning port 11224
2016-07-21 08:05:46,348 [myid:] - INFO [main:ZKTestCase$1@50] - STARTING testAsyncCreateFailure_NodeExists
2016-07-21 08:05:46,348 [myid:] - INFO [main:ClientBase@425] - Initial fdcount is: 34
2016-07-21 08:05:46,355 [myid:] - INFO [main:ClientBase@443] - STARTING server
2016-07-21 08:05:46,355 [myid:] - INFO [main:ClientBase@364] - CREATING server instance 127.0.0.1:11224
2016-07-21 08:05:46,362 [myid:] - INFO [main:ClientBase@339] - STARTING server instance 127.0.0.1:11224
2016-07-21 08:05:46,363 [myid:] - INFO [main:ZooKeeperServer@170] - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/tmp/test937004609623311539.junit.dir/version-2 snapdir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/tmp/test937004609623311539.junit.dir/version-2
2016-07-21 08:05:46,363 [myid:] - INFO [main:NettyServerCnxnFactory@365] - binding to port 0.0.0.0/0.0.0.0:11224
2016-07-21 08:05:46,364 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11224
2016-07-21 08:05:46,365 [myid:] - INFO [New I/O worker #166:NettyServerCnxn@632] - Processing stat command from /127.0.0.1:54116
2016-07-21 08:05:46,366 [myid:] - INFO [New I/O worker #166:NettyServerCnxn$StatCommand@469] - Stat command output
2016-07-21 08:05:46,366 [myid:] - INFO [main:JMXEnv@229] - ensureParent:[InMemoryDataTree, StandaloneServer_port]
2016-07-21 08:05:46,368 [myid:] - INFO [main:JMXEnv@246] - expect:InMemoryDataTree
2016-07-21 08:05:46,368 [myid:] - INFO [main:JMXEnv@250] - found:InMemoryDataTree org.apache.ZooKeeperService:name0=StandaloneServer_port11224,name1=InMemoryDataTree
2016-07-21 08:05:46,368 [myid:] - INFO [main:JMXEnv@246] - expect:StandaloneServer_port
2016-07-21 08:05:46,369 [myid:] - INFO [main:JMXEnv@250] - found:StandaloneServer_port org.apache.ZooKeeperService:name0=StandaloneServer_port11224
2016-07-21 08:05:46,369 [myid:] - INFO [main:ClientBase@439] - Client test setup finished
2016-07-21 08:05:46,369 [myid:] - INFO [main:AsyncOpsTest@47] - Creating client testAsyncCreateFailure_NodeExists
2016-07-21 08:05:46,369 [myid:] - INFO [main:ZooKeeper@438] - Initiating client connection, connectString=127.0.0.1:11224 sessionTimeout=30000 watcher=org.apache.zookeeper.test.ClientBase$CountdownWatcher@78f394a2
2016-07-21 08:05:46,370 [myid:] - INFO [main-SendThread(127.0.0.1:11224):ClientCnxn$SendThread@1032] - Opening socket connection to server 127.0.0.1/127.0.0.1:11224. Will not attempt to authenticate using SASL (unknown error)
2016-07-21 08:05:46,371 [myid:] - INFO [main-SendThread(127.0.0.1:11224):ClientCnxn$SendThread@876] - Socket connection established to 127.0.0.1/127.0.0.1:11224, initiating session
2016-07-21 08:05:46,371 [myid:] - INFO [New I/O worker #167:ZooKeeperServer@900] - Client attempting to establish new session at /127.0.0.1:54117
2016-07-21 08:05:46,372 [myid:] - INFO [SyncThread:0:FileTxnLog@203] - Creating new log file: log.1
2016-07-21 08:05:46,374 [myid:] - INFO [SyncThread:0:ZooKeeperServer@645] - Established session 0x1560c7d30fb0000 with negotiated timeout 30000 for client /127.0.0.1:54117
2016-07-21 08:05:46,374 [myid:] - INFO [main-SendThread(127.0.0.1:11224):ClientCnxn$SendThread@1299] - Session establishment complete on server 127.0.0.1/127.0.0.1:11224, sessionid = 0x1560c7d30fb0000, negotiated timeout = 30000
2016-07-21 08:05:46,376 [myid:] - INFO [main:JMXEnv@117] - expect:0x1560c7d30fb0000
2016-07-21 08:05:46,377 [myid:] - INFO [main:JMXEnv@120] - found:0x1560c7d30fb0000 org.apache.ZooKeeperService:name0=StandaloneServer_port11224,name1=Connections,name2=127.0.0.1,name3=0x1560c7d30fb0000
2016-07-21 08:05:46,377 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@53] - RUNNING TEST METHOD testAsyncCreateFailure_NodeExists
2016-07-21 08:05:46,377 [myid:] - INFO [New I/O worker #167:ZooKeeperServer@924] - got auth packet /127.0.0.1:54117
2016-07-21 08:05:46,378 [myid:] - INFO [New I/O worker #167:ZooKeeperServer@958] - auth success /127.0.0.1:54117
2016-07-21 08:05:46,382 [myid:] - INFO [ProcessThread(sid:0 cport:11224)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x1560c7d30fb0000 type:create cxid:0x2 zxid:0x3 txntype:-1 reqpath:n/a Error Path:/foo Error:KeeperErrorCode = NodeExists for /foo
2016-07-21 08:05:46,382 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@58] - Memory used 115887
2016-07-21 08:05:46,383 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@63] - Number of threads 53
2016-07-21 08:05:46,383 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@78] - FINISHED TEST METHOD testAsyncCreateFailure_NodeExists
2016-07-21 08:05:46,384 [myid:] - INFO [ProcessThread(sid:0 cport:11224)::PrepRequestProcessor@487] - Processed session termination for sessionid: 0x1560c7d30fb0000
2016-07-21 08:05:46,385 [myid:] - WARN [New I/O worker #167:NettyServerCnxnFactory$CnxnChannelHandler@111] - Exception caught [id: 0x08491bfc, /127.0.0.1:54117 :> /127.0.0.1:11224] EXCEPTION: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:270)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:479)
    at org.jboss.netty.channel.socket.nio.SocketSendBufferPool$UnpooledSendBuffer.transferTo(SocketSendBufferPool.java:203)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.write0(AbstractNioWorker.java:201)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.writeFromTaskLoop(AbstractNioWorker.java:151)
    at org.jboss.netty.channel.socket.nio.AbstractNioChannel$WriteTask.run(AbstractNioChannel.java:315)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:391)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:315)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
2016-07-21 08:05:46,485 [myid:] - INFO [main:ZooKeeper@684] - Session: 0x1560c7d30fb0000 closed
2016-07-21 08:05:46,485 [myid:] - INFO [main:ClientBase@520] - tearDown starting
2016-07-21 08:05:46,485 [myid:] - INFO [main:ClientBase@490] - STOPPING server
2016-07-21 08:05:46,486 [myid:] - INFO [main:NettyServerCnxnFactory@342] - shutdown called 0.0.0.0/0.0.0.0:11224
2016-07-21 08:05:46,485 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@519] - EventThread shut down for session: 0x1560c7d30fb0000
2016-07-21 08:05:46,495 [myid:] - INFO [main:ZooKeeperServer@469] - shutting down
2016-07-21 08:05:46,495 [myid:] - INFO [main:SessionTrackerImpl@225] - Shutting down
2016-07-21 08:05:46,495 [myid:] - INFO [main:PrepRequestProcessor@765] - Shutting down
2016-07-21 08:05:46,496 [myid:] - INFO [main:SyncRequestProcessor@209] - Shutting down
2016-07-21 08:05:46,496 [myid:] - INFO [ProcessThread(sid:0 cport:11224)::PrepRequestProcessor@143] - PrepRequestProcessor exited loop!
2016-07-21 08:05:46,496 [myid:] - INFO [SyncThread:0:SyncRequestProcessor@187] - SyncRequestProcessor exited!
2016-07-21 08:05:46,496 [myid:] - INFO [main:FinalRequestProcessor@402] - shutdown of request processor complete
2016-07-21 08:05:46,498 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11224
2016-07-21 08:05:46,499 [myid:] - INFO [main:JMXEnv@146] - ensureOnly:[]
2016-07-21 08:05:46,502 [myid:] - INFO [main:ClientBase@545] - fdcount after test is: 34 at start it was 34
2016-07-21 08:05:46,502 [myid:] - INFO [main:AsyncOpsTest@60] - Test clients shutting down
2016-07-21 08:05:46,502 [myid:] - INFO [main:ZKTestCase$1@60] - SUCCEEDED testAsyncCreateFailure_NodeExists
2016-07-21 08:05:46,502 [myid:] - INFO [main:ZKTestCase$1@55] - FINISHED testAsyncCreateFailure_NodeExists
2016-07-21 08:05:46,503 [myid:] - INFO [main:PortAssignment@32] - assigning port 11225
2016-07-21 08:05:46,503 [myid:] - INFO [main:ZKTestCase$1@50] - STARTING testAsyncCreateFailure_NoNode
2016-07-21 08:05:46,504 [myid:] - INFO [main:ClientBase@425] - Initial fdcount is: 34
2016-07-21 08:05:46,514 [myid:] - INFO [main:ClientBase@443] - STARTING server
2016-07-21 08:05:46,514 [myid:] - INFO [main:ClientBase@364] - CREATING server instance 127.0.0.1:11225
2016-07-21 08:05:46,525 [myid:] - INFO [main:ClientBase@339] - STARTING server instance 127.0.0.1:11225
2016-07-21 08:05:46,525 [myid:] - INFO [main:ZooKeeperServer@170] - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/tmp/test5380464042991622148.junit.dir/version-2 snapdir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/tmp/test5380464042991622148.junit.dir/version-2
2016-07-21 08:05:46,525 [myid:] - INFO [main:NettyServerCnxnFactory@365] - bindin ...[truncated 880056 chars]... ocessor exited loop!
2016-07-21 08:08:01,738 [myid:] - INFO [SyncThread:0:SyncRequestProcessor@187] - SyncRequestProcessor exited!
2016-07-21 08:08:01,739 [myid:] - INFO [main:FinalRequestProcessor@402] - shutdown of request processor complete
2016-07-21 08:08:01,740 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11289
2016-07-21 08:08:01,740 [myid:] - INFO [main:JMXEnv@146] - ensureOnly:[]
2016-07-21 08:08:02,332 [myid:] - INFO [main:ClientBase@443] - STARTING server
2016-07-21 08:08:02,332 [myid:] - INFO [main:ClientBase@364] - CREATING server instance 127.0.0.1:11289
2016-07-21 08:08:02,340 [myid:] - INFO [main:ClientBase@339] - STARTING server instance 127.0.0.1:11289
2016-07-21 08:08:02,341 [myid:] - INFO [main:ZooKeeperServer@170] - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/tmp/test4028127123136724556.junit.dir/version-2 snapdir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/tmp/test4028127123136724556.junit.dir/version-2
2016-07-21 08:08:02,341 [myid:] - INFO [main:NettyServerCnxnFactory@365] - binding to port 0.0.0.0/0.0.0.0:11289
2016-07-21 08:08:02,344 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11289
2016-07-21 08:08:02,345 [myid:] - INFO [New I/O worker #2707:NettyServerCnxn@632] - Processing stat command from /127.0.0.1:36112
2016-07-21 08:08:02,345 [myid:] - INFO [New I/O worker #2707:NettyServerCnxn$StatCommand@469] - Stat command output
2016-07-21 08:08:02,345 [myid:] - INFO [main:JMXEnv@229] - ensureParent:[InMemoryDataTree, StandaloneServer_port]
2016-07-21 08:08:02,346 [myid:] - INFO [main:JMXEnv@246] - expect:InMemoryDataTree
2016-07-21 08:08:02,347 [myid:] - INFO [main:JMXEnv@250] - found:InMemoryDataTree org.apache.ZooKeeperService:name0=StandaloneServer_port11289,name1=InMemoryDataTree
2016-07-21 08:08:02,347 [myid:] - INFO [main:JMXEnv@246] - expect:StandaloneServer_port
2016-07-21 08:08:02,347 [myid:] - INFO [main:JMXEnv@250] - found:StandaloneServer_port org.apache.ZooKeeperService:name0=StandaloneServer_port11289
2016-07-21 08:08:03,000 [myid:] - INFO [SessionTracker:SessionTrackerImpl@162] - SessionTrackerImpl exited loop!
2016-07-21 08:08:03,274 [myid:] - INFO [main-SendThread(127.0.0.1:11289):ClientCnxn$SendThread@1032] - Opening socket connection to server 127.0.0.1/127.0.0.1:11289. Will not attempt to authenticate using SASL (unknown error)
2016-07-21 08:08:03,274 [myid:] - INFO [main-SendThread(127.0.0.1:11289):ClientCnxn$SendThread@876] - Socket connection established to 127.0.0.1/127.0.0.1:11289, initiating session
2016-07-21 08:08:03,275 [myid:] - INFO [New I/O worker #2708:ZooKeeperServer@893] - Client attempting to renew session 0x1560c7f3a630000 at /127.0.0.1:36116
2016-07-21 08:08:03,276 [myid:] - INFO [New I/O worker #2708:ZooKeeperServer@645] - Established session 0x1560c7f3a630000 with negotiated timeout 6000 for client /127.0.0.1:36116
2016-07-21 08:08:03,276 [myid:] - INFO [main-SendThread(127.0.0.1:11289):ClientCnxn$SendThread@1299] - Session establishment complete on server 127.0.0.1/127.0.0.1:11289, sessionid = 0x1560c7f3a630000, negotiated timeout = 6000
2016-07-21 08:08:03,281 [myid:] - INFO [SyncThread:0:FileTxnLog@203] - Creating new log file: log.6
2016-07-21 08:08:03,283 [myid:] - INFO [main:ClientBase@490] - STOPPING server
2016-07-21 08:08:03,283 [myid:] - INFO [main:NettyServerCnxnFactory@342] - shutdown called 0.0.0.0/0.0.0.0:11289
2016-07-21 08:08:03,284 [myid:] - INFO [main-SendThread(127.0.0.1:11289):ClientCnxn$SendThread@1158] - Unable to read additional data from server sessionid 0x1560c7f3a630000, likely server has closed socket, closing socket connection and attempting reconnect
2016-07-21 08:08:03,293 [myid:] - INFO [main:ZooKeeperServer@469] - shutting down
2016-07-21 08:08:03,295 [myid:] - INFO [main:SessionTrackerImpl@225] - Shutting down
2016-07-21 08:08:03,296 [myid:] - INFO [main:PrepRequestProcessor@765] - Shutting down
2016-07-21 08:08:03,296 [myid:] - INFO [main:SyncRequestProcessor@209] - Shutting down
2016-07-21 08:08:03,296 [myid:] - INFO [ProcessThread(sid:0 cport:11289)::PrepRequestProcessor@143] - PrepRequestProcessor exited loop!
2016-07-21 08:08:03,296 [myid:] - INFO [SyncThread:0:SyncRequestProcessor@187] - SyncRequestProcessor exited!
2016-07-21 08:08:03,297 [myid:] - INFO [main:FinalRequestProcessor@402] - shutdown of request processor complete
2016-07-21 08:08:03,298 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11289
2016-07-21 08:08:03,298 [myid:] - INFO [main:JMXEnv@146] - ensureOnly:[]
2016-07-21 08:08:03,384 [myid:] - INFO [main:ClientBase@443] - STARTING server
2016-07-21 08:08:03,385 [myid:] - INFO [main:ClientBase@364] - CREATING server instance 127.0.0.1:11289
2016-07-21 08:08:03,392 [myid:] - INFO [main:ClientBase@339] - STARTING server instance 127.0.0.1:11289
2016-07-21 08:08:03,392 [myid:] - INFO [main:ZooKeeperServer@170] - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/tmp/test4028127123136724556.junit.dir/version-2 snapdir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/tmp/test4028127123136724556.junit.dir/version-2
2016-07-21 08:08:03,392 [myid:] - INFO [main:NettyServerCnxnFactory@365] - binding to port 0.0.0.0/0.0.0.0:11289
2016-07-21 08:08:03,395 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11289
2016-07-21 08:08:03,396 [myid:] - INFO [New I/O worker #2740:NettyServerCnxn@632] - Processing stat command from /127.0.0.1:36118
2016-07-21 08:08:03,396 [myid:] - INFO [New I/O worker #2740:NettyServerCnxn$StatCommand@469] - Stat command output
2016-07-21 08:08:03,397 [myid:] - INFO [main:JMXEnv@229] - ensureParent:[InMemoryDataTree, StandaloneServer_port]
2016-07-21 08:08:03,398 [myid:] - INFO [main:JMXEnv@246] - expect:InMemoryDataTree
2016-07-21 08:08:03,398 [myid:] - INFO [main:JMXEnv@250] - found:InMemoryDataTree org.apache.ZooKeeperService:name0=StandaloneServer_port11289,name1=InMemoryDataTree
2016-07-21 08:08:03,399 [myid:] - INFO [main:JMXEnv@246] - expect:StandaloneServer_port
2016-07-21 08:08:03,399 [myid:] - INFO [main:JMXEnv@250] - found:StandaloneServer_port org.apache.ZooKeeperService:name0=StandaloneServer_port11289
2016-07-21 08:08:04,956 [myid:] - INFO [main-SendThread(127.0.0.1:11289):ClientCnxn$SendThread@1032] - Opening socket connection to server 127.0.0.1/127.0.0.1:11289. Will not attempt to authenticate using SASL (unknown error)
2016-07-21 08:08:04,956 [myid:] - INFO [main-SendThread(127.0.0.1:11289):ClientCnxn$SendThread@876] - Socket connection established to 127.0.0.1/127.0.0.1:11289, initiating session
2016-07-21 08:08:04,957 [myid:] - INFO [New I/O worker #2741:ZooKeeperServer@893] - Client attempting to renew session 0x1560c7f3a630000 at /127.0.0.1:36123
2016-07-21 08:08:04,958 [myid:] - INFO [New I/O worker #2741:ZooKeeperServer@645] - Established session 0x1560c7f3a630000 with negotiated timeout 6000 for client /127.0.0.1:36123
2016-07-21 08:08:04,958 [myid:] - INFO [main-SendThread(127.0.0.1:11289):ClientCnxn$SendThread@1299] - Session establishment complete on server 127.0.0.1/127.0.0.1:11289, sessionid = 0x1560c7f3a630000, negotiated timeout = 6000
2016-07-21 08:08:04,959 [myid:] - INFO [SyncThread:0:FileTxnLog@203] - Creating new log file: log.7
2016-07-21 08:08:05,963 [myid:] - INFO [ProcessThread(sid:0 cport:11289)::PrepRequestProcessor@487] - Processed session termination for sessionid: 0x1560c7f3a630000
2016-07-21 08:08:05,964 [myid:] - INFO [main:ZooKeeper@684] - Session: 0x1560c7f3a630000 closed
2016-07-21 08:08:05,964 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@519] - EventThread shut down for session: 0x1560c7f3a630000
2016-07-21 08:08:05,965 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@58] - Memory used 71640
2016-07-21 08:08:05,965 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@63] - Number of threads 65
2016-07-21 08:08:05,965 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@78] - FINISHED TEST METHOD testWatcherAutoResetWithLocal
2016-07-21 08:08:05,965 [myid:] - INFO [main:ClientBase@520] - tearDown starting
2016-07-21 08:08:05,965 [myid:] - INFO [main:ClientBase@490] - STOPPING server
2016-07-21 08:08:05,965 [myid:] - INFO [main:NettyServerCnxnFactory@342] - shutdown called 0.0.0.0/0.0.0.0:11289
2016-07-21 08:08:05,972 [myid:] - INFO [main:ZooKeeperServer@469] - shutting down
2016-07-21 08:08:05,975 [myid:] - INFO [main:SessionTrackerImpl@225] - Shutting down
2016-07-21 08:08:05,975 [myid:] - INFO [main:PrepRequestProcessor@765] - Shutting down
2016-07-21 08:08:05,975 [myid:] - INFO [main:SyncRequestProcessor@209] - Shutting down
2016-07-21 08:08:05,975 [myid:] - INFO [ProcessThread(sid:0 cport:11289)::PrepRequestProcessor@143] - PrepRequestProcessor exited loop!
2016-07-21 08:08:05,975 [myid:] - INFO [SyncThread:0:SyncRequestProcessor@187] - SyncRequestProcessor exited!
2016-07-21 08:08:05,976 [myid:] - INFO [main:FinalRequestProcessor@402] - shutdown of request processor complete
2016-07-21 08:08:05,977 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11289
2016-07-21 08:08:05,978 [myid:] - INFO [main:JMXEnv@146] - ensureOnly:[]
2016-07-21 08:08:05,981 [myid:] - INFO [main:ClientBase@545] - fdcount after test is: 45 at start it was 41
2016-07-21 08:08:05,981 [myid:] - INFO [main:ClientBase@547] - sleeping for 20 secs
2016-07-21 08:08:05,981 [myid:] - INFO [main:ZKTestCase$1@60] - SUCCEEDED testWatcherAutoResetWithLocal
2016-07-21 08:08:05,982 [myid:] - INFO [main:ZKTestCase$1@55] - FINISHED testWatcherAutoResetWithLocal
2016-07-21 08:08:05,983 [myid:] - INFO [main:PortAssignment@32] - assigning port 11290
2016-07-21 08:08:05,983 [myid:] - INFO [main:ZKTestCase$1@50] - STARTING testWatcherAutoResetDisabledWithGlobal
2016-07-21 08:08:05,983 [myid:] - INFO [main:ClientBase@425] - Initial fdcount is: 45
2016-07-21 08:08:05,991 [myid:] - INFO [main:ClientBase@443] - STARTING server
2016-07-21 08:08:05,992 [myid:] - INFO [main:ClientBase@364] - CREATING server instance 127.0.0.1:11290
2016-07-21 08:08:06,000 [myid:] - INFO [SessionTracker:SessionTrackerImpl@162] - SessionTrackerImpl exited loop!
2016-07-21 08:08:06,000 [myid:] - INFO [SessionTracker:SessionTrackerImpl@162] - SessionTrackerImpl exited loop!
2016-07-21 08:08:06,002 [myid:] - INFO [main:ClientBase@339] - STARTING server instance 127.0.0.1:11290 2016-07-21 08:08:06,002 [myid:] - INFO [main:ZooKeeperServer@170] - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/tmp/test6527239161822360616.junit.dir/version-2 snapdir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/tmp/test6527239161822360616.junit.dir/version-2 2016-07-21 08:08:06,002 [myid:] - INFO [main:NettyServerCnxnFactory@365] - binding to port 0.0.0.0/0.0.0.0:11290 2016-07-21 08:08:06,004 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11290 2016-07-21 08:08:06,005 [myid:] - INFO [New I/O worker #2773:NettyServerCnxn@632] - Processing stat command from /127.0.0.1:59460 2016-07-21 08:08:06,005 [myid:] - INFO [New I/O worker #2773:NettyServerCnxn$StatCommand@469] - Stat command output 2016-07-21 08:08:06,006 [myid:] - INFO [main:JMXEnv@229] - ensureParent:[InMemoryDataTree, StandaloneServer_port] 2016-07-21 08:08:06,007 [myid:] - INFO [main:JMXEnv@246] - expect:InMemoryDataTree 2016-07-21 08:08:06,007 [myid:] - INFO [main:JMXEnv@250] - found:InMemoryDataTree org.apache.ZooKeeperService:name0=StandaloneServer_port11290,name1=InMemoryDataTree 2016-07-21 08:08:06,007 [myid:] - INFO [main:JMXEnv@246] - expect:StandaloneServer_port 2016-07-21 08:08:06,007 [myid:] - INFO [main:JMXEnv@250] - found:StandaloneServer_port org.apache.ZooKeeperService:name0=StandaloneServer_port11290 2016-07-21 08:08:06,007 [myid:] - INFO [main:ClientBase@439] - Client test setup finished 2016-07-21 08:08:06,008 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@53] - RUNNING TEST METHOD testWatcherAutoResetDisabledWithGlobal 2016-07-21 08:08:06,008 [myid:] - INFO [main:ZooKeeper@438] - Initiating client connection, connectString=127.0.0.1:11290 sessionTimeout=5000 
watcher=org.apache.zookeeper.test.WatcherTest$MyWatcher@63979eb4 2016-07-21 08:08:06,009 [myid:] - INFO [main-SendThread(127.0.0.1:11290):ClientCnxn$SendThread@1032] - Opening socket connection to server 127.0.0.1/127.0.0.1:11290. Will not attempt to authenticate using SASL (unknown error) 2016-07-21 08:08:06,009 [myid:] - INFO [main-SendThread(127.0.0.1:11290):ClientCnxn$SendThread@876] - Socket connection established to 127.0.0.1/127.0.0.1:11290, initiating session 2016-07-21 08:08:06,010 [myid:] - INFO [New I/O worker #2774:ZooKeeperServer@900] - Client attempting to establish new session at /127.0.0.1:59461 2016-07-21 08:08:06,010 [myid:] - INFO [SyncThread:0:FileTxnLog@203] - Creating new log file: log.1 2016-07-21 08:08:06,012 [myid:] - INFO [SyncThread:0:ZooKeeperServer@645] - Established session 0x1560c7f52730000 with negotiated timeout 6000 for client /127.0.0.1:59461 2016-07-21 08:08:06,012 [myid:] - INFO [main-SendThread(127.0.0.1:11290):ClientCnxn$SendThread@1299] - Session establishment complete on server 127.0.0.1/127.0.0.1:11290, sessionid = 0x1560c7f52730000, negotiated timeout = 6000 2016-07-21 08:08:06,013 [myid:] - INFO [main:JMXEnv@117] - expect:0x1560c7f52730000 2016-07-21 08:08:06,013 [myid:] - INFO [main:JMXEnv@120] - found:0x1560c7f52730000 org.apache.ZooKeeperService:name0=StandaloneServer_port11290,name1=Connections,name2=127.0.0.1,name3=0x1560c7f52730000 2016-07-21 08:08:06,018 [myid:] - INFO [main:ClientBase@490] - STOPPING server 2016-07-21 08:08:06,019 [myid:] - INFO [main:NettyServerCnxnFactory@342] - shutdown called 0.0.0.0/0.0.0.0:11290 2016-07-21 08:08:06,019 [myid:] - INFO [main-SendThread(127.0.0.1:11290):ClientCnxn$SendThread@1158] - Unable to read additional data from server sessionid 0x1560c7f52730000, likely server has closed socket, closing socket connection and attempting reconnect 2016-07-21 08:08:06,023 [myid:] - INFO [main:ZooKeeperServer@469] - shutting down 2016-07-21 08:08:06,023 [myid:] - INFO 
[main:SessionTrackerImpl@225] - Shutting down 2016-07-21 08:08:06,024 [myid:] - INFO [main:PrepRequestProcessor@765] - Shutting down 2016-07-21 08:08:06,024 [myid:] - INFO [main:SyncRequestProcessor@209] - Shutting down 2016-07-21 08:08:06,024 [myid:] - INFO [ProcessThread(sid:0 cport:11290)::PrepRequestProcessor@143] - PrepRequestProcessor exited loop! 2016-07-21 08:08:06,024 [myid:] - INFO [SyncThread:0:SyncRequestProcessor@187] - SyncRequestProcessor exited! 2016-07-21 08:08:06,026 [myid:] - INFO [main:FinalRequestProcessor@402] - shutdown of request processor complete 2016-07-21 08:08:06,027 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11290 2016-07-21 08:08:06,027 [myid:] - INFO [main:JMXEnv@146] - ensureOnly:[] 2016-07-21 08:08:06,119 [myid:] - INFO [main:ClientBase@443] - STARTING server 2016-07-21 08:08:06,120 [myid:] - INFO [main:ClientBase@364] - CREATING server instance 127.0.0.1:11290 2016-07-21 08:08:06,126 [myid:] - INFO [main:ClientBase@339] - STARTING server instance 127.0.0.1:11290 2016-07-21 08:08:06,127 [myid:] - INFO [main:ZooKeeperServer@170] - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/tmp/test6527239161822360616.junit.dir/version-2 snapdir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/tmp/test6527239161822360616.junit.dir/version-2 2016-07-21 08:08:06,127 [myid:] - INFO [main:NettyServerCnxnFactory@365] - binding to port 0.0.0.0/0.0.0.0:11290 2016-07-21 08:08:06,129 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11290 2016-07-21 08:08:06,130 [myid:] - INFO [New I/O worker #2806:NettyServerCnxn@632] - Processing stat command from /127.0.0.1:59463 2016-07-21 08:08:06,130 [myid:] - INFO [New I/O worker #2806:NettyServerCnxn$StatCommand@469] - Stat command output 2016-07-21 08:08:06,131 [myid:] - INFO [main:JMXEnv@229] - 
ensureParent:[InMemoryDataTree, StandaloneServer_port] 2016-07-21 08:08:06,132 [myid:] - INFO [main:JMXEnv@246] - expect:InMemoryDataTree 2016-07-21 08:08:06,132 [myid:] - INFO [main:JMXEnv@250] - found:InMemoryDataTree org.apache.ZooKeeperService:name0=StandaloneServer_port11290,name1=InMemoryDataTree 2016-07-21 08:08:06,132 [myid:] - INFO [main:JMXEnv@246] - expect:StandaloneServer_port 2016-07-21 08:08:06,132 [myid:] - INFO [main:JMXEnv@250] - found:StandaloneServer_port org.apache.ZooKeeperService:name0=StandaloneServer_port11290 2016-07-21 08:08:07,861 [myid:] - INFO [main-SendThread(127.0.0.1:11290):ClientCnxn$SendThread@1032] - Opening socket connection to server 127.0.0.1/127.0.0.1:11290. Will not attempt to authenticate using SASL (unknown error) 2016-07-21 08:08:07,861 [myid:] - INFO [main-SendThread(127.0.0.1:11290):ClientCnxn$SendThread@876] - Socket connection established to 127.0.0.1/127.0.0.1:11290, initiating session 2016-07-21 08:08:07,862 [myid:] - INFO [New I/O worker #2807:ZooKeeperServer@893] - Client attempting to renew session 0x1560c7f52730000 at /127.0.0.1:59469 2016-07-21 08:08:07,862 [myid:] - INFO [New I/O worker #2807:ZooKeeperServer@645] - Established session 0x1560c7f52730000 with negotiated timeout 6000 for client /127.0.0.1:59469 2016-07-21 08:08:07,862 [myid:] - INFO [main-SendThread(127.0.0.1:11290):ClientCnxn$SendThread@1299] - Session establishment complete on server 127.0.0.1/127.0.0.1:11290, sessionid = 0x1560c7f52730000, negotiated timeout = 6000 2016-07-21 08:08:07,864 [myid:] - INFO [SyncThread:0:FileTxnLog@203] - Creating new log file: log.4 2016-07-21 08:08:07,868 [myid:] - INFO [main:ClientBase@490] - STOPPING server 2016-07-21 08:08:07,868 [myid:] - INFO [main:NettyServerCnxnFactory@342] - shutdown called 0.0.0.0/0.0.0.0:11290 2016-07-21 08:08:07,869 [myid:] - INFO [main-SendThread(127.0.0.1:11290):ClientCnxn$SendThread@1158] - Unable to read additional data from server sessionid 0x1560c7f52730000, likely server has 
closed socket, closing socket connection and attempting reconnect 2016-07-21 08:08:07,872 [myid:] - INFO [main:ZooKeeperServer@469] - shutting down 2016-07-21 08:08:07,872 [myid:] - INFO [main:SessionTrackerImpl@225] - Shutting down 2016-07-21 08:08:07,872 [myid:] - INFO [main:PrepRequestProcessor@765] - Shutting down 2016-07-21 08:08:07,873 [myid:] - INFO [main:SyncRequestProcessor@209] - Shutting down 2016-07-21 08:08:07,873 [myid:] - INFO [ProcessThread(sid:0 cport:11290)::PrepRequestProcessor@143] - PrepRequestProcessor exited loop! 2016-07-21 08:08:07,873 [myid:] - INFO [SyncThread:0:SyncRequestProcessor@187] - SyncRequestProcessor exited! 2016-07-21 08:08:07,874 [myid:] - INFO [main:FinalRequestProcessor@402] - shutdown of request processor complete 2016-07-21 08:08:07,874 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11290 2016-07-21 08:08:07,875 [myid:] - INFO [main:JMXEnv@146] - ensureOnly:[] 2016-07-21 08:08:07,969 [myid:] - INFO [main:ClientBase@443] - STARTING server 2016-07-21 08:08:07,970 [myid:] - INFO [main:ClientBase@364] - CREATING server instance 127.0.0.1:11290 2016-07-21 08:08:07,978 [myid:] - INFO [main:ClientBase@339] - STARTING server instance 127.0.0.1:11290 2016-07-21 08:08:07,978 [myid:] - INFO [main:ZooKeeperServer@170] - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/tmp/test6527239161822360616.junit.dir/version-2 snapdir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/tmp/test6527239161822360616.junit.dir/version-2 2016-07-21 08:08:07,978 [myid:] - INFO [main:NettyServerCnxnFactory@365] - binding to port 0.0.0.0/0.0.0.0:11290 2016-07-21 08:08:07,981 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11290 2016-07-21 08:08:07,982 [myid:] - INFO [New I/O worker #2839:NettyServerCnxn@632] - Processing stat command from /127.0.0.1:59471 
2016-07-21 08:08:07,982 [myid:] - INFO [New I/O worker #2839:NettyServerCnxn$StatCommand@469] - Stat command output 2016-07-21 08:08:07,983 [myid:] - INFO [main:JMXEnv@229] - ensureParent:[InMemoryDataTree, StandaloneServer_port] 2016-07-21 08:08:07,984 [myid:] - INFO [main:JMXEnv@246] - expect:InMemoryDataTree 2016-07-21 08:08:07,985 [myid:] - INFO [main:JMXEnv@250] - found:InMemoryDataTree org.apache.ZooKeeperService:name0=StandaloneServer_port11290,name1=InMemoryDataTree 2016-07-21 08:08:07,985 [myid:] - INFO [main:JMXEnv@246] - expect:StandaloneServer_port 2016-07-21 08:08:07,985 [myid:] - INFO [main:JMXEnv@250] - found:StandaloneServer_port org.apache.ZooKeeperService:name0=StandaloneServer_port11290 2016-07-21 08:08:09,000 [myid:] - INFO [SessionTracker:SessionTrackerImpl@162] - SessionTrackerImpl exited loop! 2016-07-21 08:08:09,000 [myid:] - INFO [SessionTracker:SessionTrackerImpl@162] - SessionTrackerImpl exited loop! 2016-07-21 08:08:09,515 [myid:] - INFO [main-SendThread(127.0.0.1:11290):ClientCnxn$SendThread@1032] - Opening socket connection to server 127.0.0.1/127.0.0.1:11290. 
Will not attempt to authenticate using SASL (unknown error) 2016-07-21 08:08:09,515 [myid:] - INFO [main-SendThread(127.0.0.1:11290):ClientCnxn$SendThread@876] - Socket connection established to 127.0.0.1/127.0.0.1:11290, initiating session 2016-07-21 08:08:09,516 [myid:] - INFO [New I/O worker #2840:ZooKeeperServer@893] - Client attempting to renew session 0x1560c7f52730000 at /127.0.0.1:59477 2016-07-21 08:08:09,517 [myid:] - INFO [New I/O worker #2840:ZooKeeperServer@645] - Established session 0x1560c7f52730000 with negotiated timeout 6000 for client /127.0.0.1:59477 2016-07-21 08:08:09,517 [myid:] - INFO [main-SendThread(127.0.0.1:11290):ClientCnxn$SendThread@1299] - Session establishment complete on server 127.0.0.1/127.0.0.1:11290, sessionid = 0x1560c7f52730000, negotiated timeout = 6000 2016-07-21 08:08:09,521 [myid:] - INFO [SyncThread:0:FileTxnLog@203] - Creating new log file: log.6 2016-07-21 08:08:09,523 [myid:] - INFO [main:ClientBase@490] - STOPPING server 2016-07-21 08:08:09,523 [myid:] - INFO [main:NettyServerCnxnFactory@342] - shutdown called 0.0.0.0/0.0.0.0:11290 2016-07-21 08:08:09,524 [myid:] - INFO [main-SendThread(127.0.0.1:11290):ClientCnxn$SendThread@1158] - Unable to read additional data from server sessionid 0x1560c7f52730000, likely server has closed socket, closing socket connection and attempting reconnect 2016-07-21 08:08:09,530 [myid:] - INFO [main:ZooKeeperServer@469] - shutting down 2016-07-21 08:08:09,530 [myid:] - INFO [main:SessionTrackerImpl@225] - Shutting down 2016-07-21 08:08:09,530 [myid:] - INFO [main:PrepRequestProcessor@765] - Shutting down 2016-07-21 08:08:09,530 [myid:] - INFO [main:SyncRequestProcessor@209] - Shutting down 2016-07-21 08:08:09,531 [myid:] - INFO [ProcessThread(sid:0 cport:11290)::PrepRequestProcessor@143] - PrepRequestProcessor exited loop! 2016-07-21 08:08:09,531 [myid:] - INFO [SyncThread:0:SyncRequestProcessor@187] - SyncRequestProcessor exited! 
2016-07-21 08:08:09,532 [myid:] - INFO [main:FinalRequestProcessor@402] - shutdown of request processor complete 2016-07-21 08:08:09,533 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11290 2016-07-21 08:08:09,534 [myid:] - INFO [main:JMXEnv@146] - ensureOnly:[] 2016-07-21 08:08:09,625 [myid:] - INFO [main:ClientBase@443] - STARTING server 2016-07-21 08:08:09,625 [myid:] - INFO [main:ClientBase@364] - CREATING server instance 127.0.0.1:11290 2016-07-21 08:08:09,631 [myid:] - INFO [main:ClientBase@339] - STARTING server instance 127.0.0.1:11290 2016-07-21 08:08:09,632 [myid:] - INFO [main:ZooKeeperServer@170] - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/tmp/test6527239161822360616.junit.dir/version-2 snapdir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/tmp/test6527239161822360616.junit.dir/version-2 2016-07-21 08:08:09,632 [myid:] - INFO [main:NettyServerCnxnFactory@365] - binding to port 0.0.0.0/0.0.0.0:11290 2016-07-21 08:08:09,635 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11290 2016-07-21 08:08:09,636 [myid:] - INFO [New I/O worker #2872:NettyServerCnxn@632] - Processing stat command from /127.0.0.1:59479 2016-07-21 08:08:09,636 [myid:] - INFO [New I/O worker #2872:NettyServerCnxn$StatCommand@469] - Stat command output 2016-07-21 08:08:09,637 [myid:] - INFO [main:JMXEnv@229] - ensureParent:[InMemoryDataTree, StandaloneServer_port] 2016-07-21 08:08:09,638 [myid:] - INFO [main:JMXEnv@246] - expect:InMemoryDataTree 2016-07-21 08:08:09,638 [myid:] - INFO [main:JMXEnv@250] - found:InMemoryDataTree org.apache.ZooKeeperService:name0=StandaloneServer_port11290,name1=InMemoryDataTree 2016-07-21 08:08:09,638 [myid:] - INFO [main:JMXEnv@246] - expect:StandaloneServer_port 2016-07-21 08:08:09,638 [myid:] - INFO [main:JMXEnv@250] - found:StandaloneServer_port 
org.apache.ZooKeeperService:name0=StandaloneServer_port11290 2016-07-21 08:08:11,158 [myid:] - INFO [main-SendThread(127.0.0.1:11290):ClientCnxn$SendThread@1032] - Opening socket connection to server 127.0.0.1/127.0.0.1:11290. Will not attempt to authenticate using SASL (unknown error) 2016-07-21 08:08:11,159 [myid:] - INFO [main-SendThread(127.0.0.1:11290):ClientCnxn$SendThread@876] - Socket connection established to 127.0.0.1/127.0.0.1:11290, initiating session 2016-07-21 08:08:11,160 [myid:] - INFO [New I/O worker #2873:ZooKeeperServer@893] - Client attempting to renew session 0x1560c7f52730000 at /127.0.0.1:59485 2016-07-21 08:08:11,161 [myid:] - INFO [New I/O worker #2873:ZooKeeperServer@645] - Established session 0x1560c7f52730000 with negotiated timeout 6000 for client /127.0.0.1:59485 2016-07-21 08:08:11,162 [myid:] - INFO [main-SendThread(127.0.0.1:11290):ClientCnxn$SendThread@1299] - Session establishment complete on server 127.0.0.1/127.0.0.1:11290, sessionid = 0x1560c7f52730000, negotiated timeout = 6000 2016-07-21 08:08:11,163 [myid:] - INFO [SyncThread:0:FileTxnLog@203] - Creating new log file: log.7 2016-07-21 08:08:12,000 [myid:] - INFO [SessionTracker:SessionTrackerImpl@162] - SessionTrackerImpl exited loop! 
2016-07-21 08:08:12,166 [myid:] - INFO [ProcessThread(sid:0 cport:11290)::PrepRequestProcessor@487] - Processed session termination for sessionid: 0x1560c7f52730000 2016-07-21 08:08:12,167 [myid:] - INFO [main:ZooKeeper@684] - Session: 0x1560c7f52730000 closed 2016-07-21 08:08:12,167 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@519] - EventThread shut down for session: 0x1560c7f52730000 2016-07-21 08:08:12,168 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@58] - Memory used 93132 2016-07-21 08:08:12,168 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@63] - Number of threads 64 2016-07-21 08:08:12,168 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@78] - FINISHED TEST METHOD testWatcherAutoResetDisabledWithGlobal 2016-07-21 08:08:12,168 [myid:] - INFO [main:ClientBase@520] - tearDown starting 2016-07-21 08:08:12,168 [myid:] - INFO [main:ClientBase@490] - STOPPING server 2016-07-21 08:08:12,169 [myid:] - INFO [main:NettyServerCnxnFactory@342] - shutdown called 0.0.0.0/0.0.0.0:11290 2016-07-21 08:08:12,175 [myid:] - INFO [main:ZooKeeperServer@469] - shutting down 2016-07-21 08:08:12,176 [myid:] - INFO [main:SessionTrackerImpl@225] - Shutting down 2016-07-21 08:08:12,177 [myid:] - INFO [main:PrepRequestProcessor@765] - Shutting down 2016-07-21 08:08:12,177 [myid:] - INFO [main:SyncRequestProcessor@209] - Shutting down 2016-07-21 08:08:12,177 [myid:] - INFO [ProcessThread(sid:0 cport:11290)::PrepRequestProcessor@143] - PrepRequestProcessor exited loop! 2016-07-21 08:08:12,177 [myid:] - INFO [SyncThread:0:SyncRequestProcessor@187] - SyncRequestProcessor exited! 
2016-07-21 08:08:12,178 [myid:] - INFO [main:FinalRequestProcessor@402] - shutdown of request processor complete 2016-07-21 08:08:12,179 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11290 2016-07-21 08:08:12,179 [myid:] - INFO [main:JMXEnv@146] - ensureOnly:[] 2016-07-21 08:08:12,181 [myid:] - INFO [main:ClientBase@545] - fdcount after test is: 45 at start it was 45 2016-07-21 08:08:12,182 [myid:] - INFO [main:ZKTestCase$1@60] - SUCCEEDED testWatcherAutoResetDisabledWithGlobal 2016-07-21 08:08:12,182 [myid:] - INFO [main:ZKTestCase$1@55] - FINISHED testWatcherAutoResetDisabledWithGlobal 2016-07-21 08:08:12,182 [myid:] - INFO [main:PortAssignment@32] - assigning port 11291 2016-07-21 08:08:12,182 [myid:] - INFO [main:ZKTestCase$1@50] - STARTING testWatcherAutoResetDisabledWithLocal 2016-07-21 08:08:12,183 [myid:] - INFO [main:ClientBase@425] - Initial fdcount is: 45 2016-07-21 08:08:12,191 [myid:] - INFO [main:ClientBase@443] - STARTING server 2016-07-21 08:08:12,191 [myid:] - INFO [main:ClientBase@364] - CREATING server instance 127.0.0.1:11291 2016-07-21 08:08:12,197 [myid:] - INFO [main:ClientBase@339] - STARTING server instance 127.0.0.1:11291 2016-07-21 08:08:12,197 [myid:] - INFO [main:ZooKeeperServer@170] - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/tmp/test5697721892495919568.junit.dir/version-2 snapdir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/tmp/test5697721892495919568.junit.dir/version-2 2016-07-21 08:08:12,198 [myid:] - INFO [main:NettyServerCnxnFactory@365] - binding to port 0.0.0.0/0.0.0.0:11291 2016-07-21 08:08:12,199 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11291 2016-07-21 08:08:12,200 [myid:] - INFO [New I/O worker #2905:NettyServerCnxn@632] - Processing stat command from /127.0.0.1:51830 2016-07-21 08:08:12,200 [myid:] - 
INFO [New I/O worker #2905:NettyServerCnxn$StatCommand@469] - Stat command output 2016-07-21 08:08:12,200 [myid:] - INFO [main:JMXEnv@229] - ensureParent:[InMemoryDataTree, StandaloneServer_port] 2016-07-21 08:08:12,202 [myid:] - INFO [main:JMXEnv@246] - expect:InMemoryDataTree 2016-07-21 08:08:12,202 [myid:] - INFO [main:JMXEnv@250] - found:InMemoryDataTree org.apache.ZooKeeperService:name0=StandaloneServer_port11291,name1=InMemoryDataTree 2016-07-21 08:08:12,202 [myid:] - INFO [main:JMXEnv@246] - expect:StandaloneServer_port 2016-07-21 08:08:12,202 [myid:] - INFO [main:JMXEnv@250] - found:StandaloneServer_port org.apache.ZooKeeperService:name0=StandaloneServer_port11291 2016-07-21 08:08:12,202 [myid:] - INFO [main:ClientBase@439] - Client test setup finished 2016-07-21 08:08:12,202 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@53] - RUNNING TEST METHOD testWatcherAutoResetDisabledWithLocal 2016-07-21 08:08:12,202 [myid:] - INFO [main:ZooKeeper@438] - Initiating client connection, connectString=127.0.0.1:11291 sessionTimeout=5000 watcher=org.apache.zookeeper.test.WatcherTest$MyWatcher@b901239 2016-07-21 08:08:12,203 [myid:] - INFO [main-SendThread(127.0.0.1:11291):ClientCnxn$SendThread@1032] - Opening socket connection to server 127.0.0.1/127.0.0.1:11291. 
Will not attempt to authenticate using SASL (unknown error) 2016-07-21 08:08:12,203 [myid:] - INFO [main-SendThread(127.0.0.1:11291):ClientCnxn$SendThread@876] - Socket connection established to 127.0.0.1/127.0.0.1:11291, initiating session 2016-07-21 08:08:12,204 [myid:] - INFO [New I/O worker #2906:ZooKeeperServer@900] - Client attempting to establish new session at /127.0.0.1:51831 2016-07-21 08:08:12,205 [myid:] - INFO [SyncThread:0:FileTxnLog@203] - Creating new log file: log.1 2016-07-21 08:08:12,206 [myid:] - INFO [SyncThread:0:ZooKeeperServer@645] - Established session 0x1560c7f6aa60000 with negotiated timeout 6000 for client /127.0.0.1:51831 2016-07-21 08:08:12,207 [myid:] - INFO [main-SendThread(127.0.0.1:11291):ClientCnxn$SendThread@1299] - Session establishment complete on server 127.0.0.1/127.0.0.1:11291, sessionid = 0x1560c7f6aa60000, negotiated timeout = 6000 2016-07-21 08:08:12,208 [myid:] - INFO [main:JMXEnv@117] - expect:0x1560c7f6aa60000 2016-07-21 08:08:12,209 [myid:] - INFO [main:JMXEnv@120] - found:0x1560c7f6aa60000 org.apache.ZooKeeperService:name0=StandaloneServer_port11291,name1=Connections,name2=127.0.0.1,name3=0x1560c7f6aa60000 2016-07-21 08:08:12,214 [myid:] - INFO [main:ClientBase@490] - STOPPING server 2016-07-21 08:08:12,214 [myid:] - INFO [main:NettyServerCnxnFactory@342] - shutdown called 0.0.0.0/0.0.0.0:11291 2016-07-21 08:08:12,214 [myid:] - INFO [main-SendThread(127.0.0.1:11291):ClientCnxn$SendThread@1158] - Unable to read additional data from server sessionid 0x1560c7f6aa60000, likely server has closed socket, closing socket connection and attempting reconnect 2016-07-21 08:08:12,219 [myid:] - INFO [main:ZooKeeperServer@469] - shutting down 2016-07-21 08:08:12,219 [myid:] - INFO [main:SessionTrackerImpl@225] - Shutting down 2016-07-21 08:08:12,219 [myid:] - INFO [main:PrepRequestProcessor@765] - Shutting down 2016-07-21 08:08:12,220 [myid:] - INFO [main:SyncRequestProcessor@209] - Shutting down 2016-07-21 08:08:12,220 [myid:] - 
INFO [ProcessThread(sid:0 cport:11291)::PrepRequestProcessor@143] - PrepRequestProcessor exited loop! 2016-07-21 08:08:12,220 [myid:] - INFO [SyncThread:0:SyncRequestProcessor@187] - SyncRequestProcessor exited! 2016-07-21 08:08:12,221 [myid:] - INFO [main:FinalRequestProcessor@402] - shutdown of request processor complete 2016-07-21 08:08:12,222 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11291 2016-07-21 08:08:12,223 [myid:] - INFO [main:JMXEnv@146] - ensureOnly:[] 2016-07-21 08:08:12,315 [myid:] - INFO [main:ClientBase@443] - STARTING server 2016-07-21 08:08:12,315 [myid:] - INFO [main:ClientBase@364] - CREATING server instance 127.0.0.1:11291 2016-07-21 08:08:12,321 [myid:] - INFO [main:ClientBase@339] - STARTING server instance 127.0.0.1:11291 2016-07-21 08:08:12,321 [myid:] - INFO [main:ZooKeeperServer@170] - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/tmp/test5697721892495919568.junit.dir/version-2 snapdir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/tmp/test5697721892495919568.junit.dir/version-2 2016-07-21 08:08:12,321 [myid:] - INFO [main:NettyServerCnxnFactory@365] - binding to port 0.0.0.0/0.0.0.0:11291 2016-07-21 08:08:12,324 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11291 2016-07-21 08:08:12,325 [myid:] - INFO [New I/O worker #2938:NettyServerCnxn@632] - Processing stat command from /127.0.0.1:51833 2016-07-21 08:08:12,325 [myid:] - INFO [New I/O worker #2938:NettyServerCnxn$StatCommand@469] - Stat command output 2016-07-21 08:08:12,326 [myid:] - INFO [main:JMXEnv@229] - ensureParent:[InMemoryDataTree, StandaloneServer_port] 2016-07-21 08:08:12,327 [myid:] - INFO [main:JMXEnv@246] - expect:InMemoryDataTree 2016-07-21 08:08:12,327 [myid:] - INFO [main:JMXEnv@250] - found:InMemoryDataTree 
org.apache.ZooKeeperService:name0=StandaloneServer_port11291,name1=InMemoryDataTree 2016-07-21 08:08:12,327 [myid:] - INFO [main:JMXEnv@246] - expect:StandaloneServer_port 2016-07-21 08:08:12,327 [myid:] - INFO [main:JMXEnv@250] - found:StandaloneServer_port org.apache.ZooKeeperService:name0=StandaloneServer_port11291 2016-07-21 08:08:13,518 [myid:] - INFO [main-SendThread(127.0.0.1:11291):ClientCnxn$SendThread@1032] - Opening socket connection to server 127.0.0.1/127.0.0.1:11291. Will not attempt to authenticate using SASL (unknown error) 2016-07-21 08:08:13,518 [myid:] - INFO [main-SendThread(127.0.0.1:11291):ClientCnxn$SendThread@876] - Socket connection established to 127.0.0.1/127.0.0.1:11291, initiating session 2016-07-21 08:08:13,519 [myid:] - INFO [New I/O worker #2939:ZooKeeperServer@893] - Client attempting to renew session 0x1560c7f6aa60000 at /127.0.0.1:51839 2016-07-21 08:08:13,520 [myid:] - INFO [New I/O worker #2939:ZooKeeperServer@645] - Established session 0x1560c7f6aa60000 with negotiated timeout 6000 for client /127.0.0.1:51839 2016-07-21 08:08:13,520 [myid:] - INFO [main-SendThread(127.0.0.1:11291):ClientCnxn$SendThread@1299] - Session establishment complete on server 127.0.0.1/127.0.0.1:11291, sessionid = 0x1560c7f6aa60000, negotiated timeout = 6000 2016-07-21 08:08:13,521 [myid:] - INFO [SyncThread:0:FileTxnLog@203] - Creating new log file: log.4 2016-07-21 08:08:13,523 [myid:] - INFO [main:ClientBase@490] - STOPPING server 2016-07-21 08:08:13,524 [myid:] - INFO [main:NettyServerCnxnFactory@342] - shutdown called 0.0.0.0/0.0.0.0:11291 2016-07-21 08:08:13,525 [myid:] - INFO [main-SendThread(127.0.0.1:11291):ClientCnxn$SendThread@1158] - Unable to read additional data from server sessionid 0x1560c7f6aa60000, likely server has closed socket, closing socket connection and attempting reconnect 2016-07-21 08:08:13,531 [myid:] - INFO [main:ZooKeeperServer@469] - shutting down 2016-07-21 08:08:13,531 [myid:] - INFO [main:SessionTrackerImpl@225] - 
Shutting down 2016-07-21 08:08:13,531 [myid:] - INFO [main:PrepRequestProcessor@765] - Shutting down 2016-07-21 08:08:13,531 [myid:] - INFO [main:SyncRequestProcessor@209] - Shutting down 2016-07-21 08:08:13,531 [myid:] - INFO [ProcessThread(sid:0 cport:11291)::PrepRequestProcessor@143] - PrepRequestProcessor exited loop! 2016-07-21 08:08:13,531 [myid:] - INFO [SyncThread:0:SyncRequestProcessor@187] - SyncRequestProcessor exited! 2016-07-21 08:08:13,532 [myid:] - INFO [main:FinalRequestProcessor@402] - shutdown of request processor complete 2016-07-21 08:08:13,533 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11291 2016-07-21 08:08:13,533 [myid:] - INFO [main:JMXEnv@146] - ensureOnly:[] 2016-07-21 08:08:13,625 [myid:] - INFO [main:ClientBase@443] - STARTING server 2016-07-21 08:08:13,625 [myid:] - INFO [main:ClientBase@364] - CREATING server instance 127.0.0.1:11291 2016-07-21 08:08:13,633 [myid:] - INFO [main:ClientBase@339] - STARTING server instance 127.0.0.1:11291 2016-07-21 08:08:13,634 [myid:] - INFO [main:ZooKeeperServer@170] - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/tmp/test5697721892495919568.junit.dir/version-2 snapdir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/tmp/test5697721892495919568.junit.dir/version-2 2016-07-21 08:08:13,634 [myid:] - INFO [main:NettyServerCnxnFactory@365] - binding to port 0.0.0.0/0.0.0.0:11291 2016-07-21 08:08:13,636 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11291 2016-07-21 08:08:13,637 [myid:] - INFO [New I/O worker #2971:NettyServerCnxn@632] - Processing stat command from /127.0.0.1:51841 2016-07-21 08:08:13,637 [myid:] - INFO [New I/O worker #2971:NettyServerCnxn$StatCommand@469] - Stat command output 2016-07-21 08:08:13,637 [myid:] - INFO [main:JMXEnv@229] - ensureParent:[InMemoryDataTree, 
StandaloneServer_port] 2016-07-21 08:08:13,639 [myid:] - INFO [main:JMXEnv@246] - expect:InMemoryDataTree 2016-07-21 08:08:13,639 [myid:] - INFO [main:JMXEnv@250] - found:InMemoryDataTree org.apache.ZooKeeperService:name0=StandaloneServer_port11291,name1=InMemoryDataTree 2016-07-21 08:08:13,639 [myid:] - INFO [main:JMXEnv@246] - expect:StandaloneServer_port 2016-07-21 08:08:13,639 [myid:] - INFO [main:JMXEnv@250] - found:StandaloneServer_port org.apache.ZooKeeperService:name0=StandaloneServer_port11291 2016-07-21 08:08:14,760 [myid:] - INFO [main-SendThread(127.0.0.1:11291):ClientCnxn$SendThread@1032] - Opening socket connection to server 127.0.0.1/127.0.0.1:11291. Will not attempt to authenticate using SASL (unknown error) 2016-07-21 08:08:14,761 [myid:] - INFO [main-SendThread(127.0.0.1:11291):ClientCnxn$SendThread@876] - Socket connection established to 127.0.0.1/127.0.0.1:11291, initiating session 2016-07-21 08:08:14,762 [myid:] - INFO [New I/O worker #2972:ZooKeeperServer@893] - Client attempting to renew session 0x1560c7f6aa60000 at /127.0.0.1:51846 2016-07-21 08:08:14,763 [myid:] - INFO [New I/O worker #2972:ZooKeeperServer@645] - Established session 0x1560c7f6aa60000 with negotiated timeout 6000 for client /127.0.0.1:51846 2016-07-21 08:08:14,763 [myid:] - INFO [main-SendThread(127.0.0.1:11291):ClientCnxn$SendThread@1299] - Session establishment complete on server 127.0.0.1/127.0.0.1:11291, sessionid = 0x1560c7f6aa60000, negotiated timeout = 6000 2016-07-21 08:08:14,768 [myid:] - INFO [SyncThread:0:FileTxnLog@203] - Creating new log file: log.6 2016-07-21 08:08:14,770 [myid:] - INFO [main:ClientBase@490] - STOPPING server 2016-07-21 08:08:14,771 [myid:] - INFO [main:NettyServerCnxnFactory@342] - shutdown called 0.0.0.0/0.0.0.0:11291 2016-07-21 08:08:14,771 [myid:] - INFO [main-SendThread(127.0.0.1:11291):ClientCnxn$SendThread@1158] - Unable to read additional data from server sessionid 0x1560c7f6aa60000, likely server has closed socket, closing socket 
connection and attempting reconnect 2016-07-21 08:08:14,777 [myid:] - INFO [main:ZooKeeperServer@469] - shutting down 2016-07-21 08:08:14,777 [myid:] - INFO [main:SessionTrackerImpl@225] - Shutting down 2016-07-21 08:08:14,778 [myid:] - INFO [main:PrepRequestProcessor@765] - Shutting down 2016-07-21 08:08:14,779 [myid:] - INFO [main:SyncRequestProcessor@209] - Shutting down 2016-07-21 08:08:14,779 [myid:] - INFO [ProcessThread(sid:0 cport:11291)::PrepRequestProcessor@143] - PrepRequestProcessor exited loop! 2016-07-21 08:08:14,779 [myid:] - INFO [SyncThread:0:SyncRequestProcessor@187] - SyncRequestProcessor exited! 2016-07-21 08:08:14,780 [myid:] - INFO [main:FinalRequestProcessor@402] - shutdown of request processor complete 2016-07-21 08:08:14,782 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11291 2016-07-21 08:08:14,782 [myid:] - INFO [main:JMXEnv@146] - ensureOnly:[] 2016-07-21 08:08:14,871 [myid:] - INFO [main:ClientBase@443] - STARTING server 2016-07-21 08:08:14,872 [myid:] - INFO [main:ClientBase@364] - CREATING server instance 127.0.0.1:11291 2016-07-21 08:08:14,880 [myid:] - INFO [main:ClientBase@339] - STARTING server instance 127.0.0.1:11291 2016-07-21 08:08:14,881 [myid:] - INFO [main:ZooKeeperServer@170] - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/tmp/test5697721892495919568.junit.dir/version-2 snapdir /home/jenkins/jenkins-slave/workspace/ZooKeeper_branch34_jdk7/branch-3.4/build/test/tmp/test5697721892495919568.junit.dir/version-2 2016-07-21 08:08:14,881 [myid:] - INFO [main:NettyServerCnxnFactory@365] - binding to port 0.0.0.0/0.0.0.0:11291 2016-07-21 08:08:14,884 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11291 2016-07-21 08:08:14,885 [myid:] - INFO [New I/O worker #3004:NettyServerCnxn@632] - Processing stat command from /127.0.0.1:51849 2016-07-21 08:08:14,886 [myid:] 
- INFO [New I/O worker #3004:NettyServerCnxn$StatCommand@469] - Stat command output 2016-07-21 08:08:14,886 [myid:] - INFO [main:JMXEnv@229] - ensureParent:[InMemoryDataTree, StandaloneServer_port] 2016-07-21 08:08:14,887 [myid:] - INFO [main:JMXEnv@246] - expect:InMemoryDataTree 2016-07-21 08:08:14,887 [myid:] - INFO [main:JMXEnv@250] - found:InMemoryDataTree org.apache.ZooKeeperService:name0=StandaloneServer_port11291,name1=InMemoryDataTree 2016-07-21 08:08:14,887 [myid:] - INFO [main:JMXEnv@246] - expect:StandaloneServer_port 2016-07-21 08:08:14,888 [myid:] - INFO [main:JMXEnv@250] - found:StandaloneServer_port org.apache.ZooKeeperService:name0=StandaloneServer_port11291 2016-07-21 08:08:15,000 [myid:] - INFO [SessionTracker:SessionTrackerImpl@162] - SessionTrackerImpl exited loop! 2016-07-21 08:08:15,001 [myid:] - INFO [SessionTracker:SessionTrackerImpl@162] - SessionTrackerImpl exited loop! 2016-07-21 08:08:15,001 [myid:] - INFO [SessionTracker:SessionTrackerImpl@162] - SessionTrackerImpl exited loop! 2016-07-21 08:08:15,001 [myid:] - INFO [SessionTracker:SessionTrackerImpl@162] - SessionTrackerImpl exited loop! 2016-07-21 08:08:16,075 [myid:] - INFO [main-SendThread(127.0.0.1:11291):ClientCnxn$SendThread@1032] - Opening socket connection to server 127.0.0.1/127.0.0.1:11291. 
Will not attempt to authenticate using SASL (unknown error) 2016-07-21 08:08:16,075 [myid:] - INFO [main-SendThread(127.0.0.1:11291):ClientCnxn$SendThread@876] - Socket connection established to 127.0.0.1/127.0.0.1:11291, initiating session 2016-07-21 08:08:16,076 [myid:] - INFO [New I/O worker #3005:ZooKeeperServer@893] - Client attempting to renew session 0x1560c7f6aa60000 at /127.0.0.1:51852 2016-07-21 08:08:16,077 [myid:] - INFO [New I/O worker #3005:ZooKeeperServer@645] - Established session 0x1560c7f6aa60000 with negotiated timeout 6000 for client /127.0.0.1:51852 2016-07-21 08:08:16,077 [myid:] - INFO [main-SendThread(127.0.0.1:11291):ClientCnxn$SendThread@1299] - Session establishment complete on server 127.0.0.1/127.0.0.1:11291, sessionid = 0x1560c7f6aa60000, negotiated timeout = 6000 2016-07-21 08:08:16,078 [myid:] - INFO [SyncThread:0:FileTxnLog@203] - Creating new log file: log.7 2016-07-21 08:08:17,082 [myid:] - INFO [ProcessThread(sid:0 cport:11291)::PrepRequestProcessor@487] - Processed session termination for sessionid: 0x1560c7f6aa60000 2016-07-21 08:08:17,083 [myid:] - WARN [New I/O worker #3005:NettyServerCnxnFactory$CnxnChannelHandler@111] - Exception caught [id: 0x3265227a, /127.0.0.1:51852 :> /127.0.0.1:11291] EXCEPTION: java.nio.channels.ClosedChannelException java.nio.channels.ClosedChannelException at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:270) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:479) at org.jboss.netty.channel.socket.nio.SocketSendBufferPool$UnpooledSendBuffer.transferTo(SocketSendBufferPool.java:203) at org.jboss.netty.channel.socket.nio.AbstractNioWorker.write0(AbstractNioWorker.java:201) at org.jboss.netty.channel.socket.nio.AbstractNioWorker.writeFromTaskLoop(AbstractNioWorker.java:151) at org.jboss.netty.channel.socket.nio.AbstractNioChannel$WriteTask.run(AbstractNioChannel.java:315) at 
org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:391) at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:315) at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 2016-07-21 08:08:17,184 [myid:] - INFO [main:ZooKeeper@684] - Session: 0x1560c7f6aa60000 closed 2016-07-21 08:08:17,184 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@519] - EventThread shut down for session: 0x1560c7f6aa60000 2016-07-21 08:08:17,184 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@58] - Memory used 115316 2016-07-21 08:08:17,184 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@63] - Number of threads 63 2016-07-21 08:08:17,184 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@78] - FINISHED TEST METHOD testWatcherAutoResetDisabledWithLocal 2016-07-21 08:08:17,184 [myid:] - INFO [main:ClientBase@520] - tearDown starting 2016-07-21 08:08:17,184 [myid:] - INFO [main:ClientBase@490] - STOPPING server 2016-07-21 08:08:17,185 [myid:] - INFO [main:NettyServerCnxnFactory@342] - shutdown called 0.0.0.0/0.0.0.0:11291 2016-07-21 08:08:17,190 [myid:] - INFO [main:ZooKeeperServer@469] - shutting down 2016-07-21 08:08:17,191 [myid:] - INFO [main:SessionTrackerImpl@225] - Shutting down 2016-07-21 08:08:17,191 [myid:] - INFO [main:PrepRequestProcessor@765] - Shutting down 2016-07-21 08:08:17,191 [myid:] - INFO [main:SyncRequestProcessor@209] - Shutting down 2016-07-21 08:08:17,191 [myid:] - INFO 
[ProcessThread(sid:0 cport:11291)::PrepRequestProcessor@143] - PrepRequestProcessor exited loop! 2016-07-21 08:08:17,191 [myid:] - INFO [SyncThread:0:SyncRequestProcessor@187] - SyncRequestProcessor exited! 2016-07-21 08:08:17,192 [myid:] - INFO [main:FinalRequestProcessor@402] - shutdown of request processor complete 2016-07-21 08:08:17,192 [myid:] - INFO [main:FourLetterWordMain@62] - connecting to 127.0.0.1 11291 2016-07-21 08:08:17,193 [myid:] - INFO [main:JMXEnv@146] - ensureOnly:[] 2016-07-21 08:08:17,195 [myid:] - INFO [main:ClientBase@545] - fdcount after test is: 41 at start it was 45 2016-07-21 08:08:17,195 [myid:] - INFO [main:ZKTestCase$1@60] - SUCCEEDED testWatcherAutoResetDisabledWithLocal 2016-07-21 08:08:17,195 [myid:] - INFO [main:ZKTestCase$1@55] - FINISHED testWatcherAutoResetDisabledWithLocal {noformat} |
flaky, flaky-test | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 1 year, 21 weeks ago | 0|i31cm7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2484 | Flaky Test: org.apache.zookeeper.test.LoadFromLogTest.testLoadFailure |
Test | Closed | Major | Fixed | Michael Han | Michael Han | Michael Han | 21/Jul/16 17:55 | 17/May/17 23:43 | 08/Sep/16 18:11 | 3.4.8, 3.5.2 | 3.5.3, 3.6.0 | server, tests | 0 | 2 | ZOOKEEPER-2135 | From https://builds.apache.org/job/ZooKeeper-trunk-openjdk7/1098/ {noformat} Error Message KeeperErrorCode = ConnectionLoss for /data- Stacktrace org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /data- at org.apache.zookeeper.KeeperException.create(KeeperException.java:99) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:1412) at org.apache.zookeeper.test.LoadFromLogTest.testLoadFailure(LoadFromLogTest.java:157) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:79) Standard Output 2016-07-21 04:29:02,537 [myid:] - INFO [main:PortAssignment@157] - Single test process using ports from 11221 - 32767. 2016-07-21 04:29:02,551 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11222 from range 11221 - 32767. 2016-07-21 04:29:02,652 [myid:] - INFO [main:JUnit4ZKTestRunner@47] - No test.method specified. using default methods. 2016-07-21 04:29:02,719 [myid:] - INFO [main:JUnit4ZKTestRunner@47] - No test.method specified. using default methods. 
2016-07-21 04:29:02,733 [myid:] - INFO [main:ZKTestCase$1@55] - STARTING testReloadSnapshotWithMissingParent 2016-07-21 04:29:02,734 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@77] - RUNNING TEST METHOD testReloadSnapshotWithMissingParent 2016-07-21 04:29:02,768 [myid:] - INFO [main:Environment@109] - Server environment:zookeeper.version=3.6.0-SNAPSHOT-1753636, built on 07/21/2016 03:58 GMT 2016-07-21 04:29:02,768 [myid:] - INFO [main:Environment@109] - Server environment:host.name=jenkins-test-3b9 2016-07-21 04:29:02,768 [myid:] - INFO [main:Environment@109] - Server environment:java.version=1.7.0_101 2016-07-21 04:29:02,769 [myid:] - INFO [main:Environment@109] - Server environment:java.vendor=Oracle Corporation 2016-07-21 04:29:02,769 [myid:] - INFO [main:Environment@109] - Server environment:java.home=/usr/lib/jvm/java-7-openjdk-amd64/jre 2016-07-21 04:29:02,769 [myid:] - INFO [main:Environment@109] - Server environment:java.class.path=/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/classes:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/lib/antlr-2.7.7.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/lib/antlr4-runtime-4.5.1-1.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/lib/checkstyle-6.13.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/lib/commons-beanutils-1.9.2.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/lib/commons-cli-1.3.1.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/lib/commons-collections-3.2.2.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/lib/commons-lang3-3.4.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/lib/commons-logging-1.1.1.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/
lib/guava-18.0.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/lib/hamcrest-core-1.3.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/lib/junit-4.12.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/lib/mockito-all-1.8.2.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/classes:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/src/java/lib/ivy-2.4.0.jar:/home/jenkins/tools/ant/latest/lib/ant.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/lib/commons-cli-1.2.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/lib/jackson-core-asl-1.9.11.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/lib/jackson-mapper-asl-1.9.11.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/lib/javacc.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/lib/jetty-6.1.26.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/lib/jetty-util-6.1.26.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/lib/jline-2.11.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/lib/log4j-1.2.17.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/lib/netty-3.10.5.Final.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/lib/servlet-api-2.5-20081211.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/lib/slf4j-api-1.7.5.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/lib/slf4j-log4j12-1.7.5.jar:/home/jenkins/tools/ant/apache-ant-1.8.2/lib/ant-launcher.jar:/home/jenkins/tools/ant/latest/lib/ant-junit.jar:/home/jenkins/tools/ant/latest/lib/ant-junit4.jar 2016-07-21 04:29:02,769 [myid:] - INFO [main:Environment@109] - Server 
environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib 2016-07-21 04:29:02,769 [myid:] - INFO [main:Environment@109] - Server environment:java.io.tmpdir=/tmp 2016-07-21 04:29:02,770 [myid:] - INFO [main:Environment@109] - Server environment:java.compiler=<NA> 2016-07-21 04:29:02,770 [myid:] - INFO [main:Environment@109] - Server environment:os.name=Linux 2016-07-21 04:29:02,772 [myid:] - INFO [main:Environment@109] - Server environment:os.arch=amd64 2016-07-21 04:29:02,773 [myid:] - INFO [main:Environment@109] - Server environment:os.version=3.13.0-30-generic 2016-07-21 04:29:02,773 [myid:] - INFO [main:Environment@109] - Server environment:user.name=jenkins 2016-07-21 04:29:02,773 [myid:] - INFO [main:Environment@109] - Server environment:user.home=/home/jenkins 2016-07-21 04:29:02,773 [myid:] - INFO [main:Environment@109] - Server environment:user.dir=/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test 2016-07-21 04:29:02,773 [myid:] - INFO [main:Environment@109] - Server environment:os.memory.free=48MB 2016-07-21 04:29:02,773 [myid:] - INFO [main:Environment@109] - Server environment:os.memory.max=455MB 2016-07-21 04:29:02,774 [myid:] - INFO [main:Environment@109] - Server environment:os.memory.total=60MB 2016-07-21 04:29:02,790 [myid:] - INFO [main:ZooKeeperServer@858] - minSessionTimeout set to 6000 2016-07-21 04:29:02,790 [myid:] - INFO [main:ZooKeeperServer@867] - maxSessionTimeout set to 60000 2016-07-21 04:29:02,790 [myid:] - INFO [main:ZooKeeperServer@156] - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/tmp/test393292139818745013.junit.dir/version-2 snapdir /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/tmp/test393292139818745013.junit.dir/version-2 2016-07-21 
04:29:02,811 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 1 selector thread(s), 8 worker threads, and 64 kB direct buffers. 2016-07-21 04:29:02,821 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port 0.0.0.0/0.0.0.0:11222 2016-07-21 04:29:02,835 [myid:] - INFO [main:FileTxnSnapLog@298] - Snapshotting: 0x0 to /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/tmp/test393292139818745013.junit.dir/version-2/snapshot.0 2016-07-21 04:29:02,952 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 11222 2016-07-21 04:29:02,958 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11222:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:48675 2016-07-21 04:29:02,967 [myid:] - INFO [NIOWorkerThread-1:NIOServerCnxn@485] - Processing stat command from /127.0.0.1:48675 2016-07-21 04:29:02,984 [myid:] - INFO [NIOWorkerThread-1:StatCommand@49] - Stat command output 2016-07-21 04:29:02,985 [myid:] - INFO [NIOWorkerThread-1:NIOServerCnxn@607] - Closed socket connection for client /127.0.0.1:48675 (no session established for client) 2016-07-21 04:29:02,992 [myid:] - INFO [main:Environment@109] - Client environment:zookeeper.version=3.6.0-SNAPSHOT-1753636, built on 07/21/2016 03:58 GMT 2016-07-21 04:29:02,992 [myid:] - INFO [main:Environment@109] - Client environment:host.name=jenkins-test-3b9 2016-07-21 04:29:02,992 [myid:] - INFO [main:Environment@109] - Client environment:java.version=1.7.0_101 2016-07-21 04:29:02,992 [myid:] - INFO [main:Environment@109] - Client environment:java.vendor=Oracle Corporation 2016-07-21 04:29:02,993 [myid:] - INFO [main:Environment@109] - Client environment:java.home=/usr/lib/jvm/java-7-openjdk-amd64/jre 2016-07-21 04:29:02,993 [myid:] - INFO [main:Environment@109] - Client 
environment:java.class.path=/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/classes:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/lib/antlr-2.7.7.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/lib/antlr4-runtime-4.5.1-1.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/lib/checkstyle-6.13.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/lib/commons-beanutils-1.9.2.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/lib/commons-cli-1.3.1.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/lib/commons-collections-3.2.2.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/lib/commons-lang3-3.4.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/lib/commons-logging-1.1.1.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/lib/guava-18.0.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/lib/hamcrest-core-1.3.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/lib/junit-4.12.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/lib/mockito-all-1.8.2.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/classes:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/src/java/lib/ivy-2.4.0.jar:/home/jenkins/tools/ant/latest/lib/ant.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/lib/commons-cli-1.2.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/lib/jackson-core-asl-1.9.11.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/lib/jackson-mapper-asl-1.9.11.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build
/lib/javacc.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/lib/jetty-6.1.26.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/lib/jetty-util-6.1.26.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/lib/jline-2.11.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/lib/log4j-1.2.17.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/lib/netty-3.10.5.Final.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/lib/servlet-api-2.5-20081211.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/lib/slf4j-api-1.7.5.jar:/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/lib/slf4j-log4j12-1.7.5.jar:/home/jenkins/tools/ant/apache-ant-1.8.2/lib/ant-launcher.jar:/home/jenkins/tools/ant/latest/lib/ant-junit.jar:/home/jenkins/tools/ant/latest/lib/ant-junit4.jar 2016-07-21 04:29:02,993 [myid:] - INFO [main:Environment@109] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib 2016-07-21 04:29:02,993 [myid:] - INFO [main:Environment@109] - Client environment:java.io.tmpdir=/tmp 2016-07-21 04:29:02,994 [myid:] - INFO [main:Environment@109] - Client environment:java.compiler=<NA> 2016-07-21 04:29:02,994 [myid:] - INFO [main:Environment@109] - Client environment:os.name=Linux 2016-07-21 04:29:02,994 [myid:] - INFO [main:Environment@109] - Client environment:os.arch=amd64 2016-07-21 04:29:02,994 [myid:] - INFO [main:Environment@109] - Client environment:os.version=3.13.0-30-generic 2016-07-21 04:29:02,994 [myid:] - INFO [main:Environment@109] - Client environment:user.name=jenkins 2016-07-21 04:29:02,994 [myid:] - INFO [main:Environment@109] - Client environment:user.home=/home/jenkins 2016-07-21 04:29:02,995 [myid:] - INFO [main:Environment@109] - 
Client environment:user.dir=/home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test 2016-07-21 04:29:02,995 [myid:] - INFO [main:Environment@109] - Client environment:os.memory.free=56MB 2016-07-21 04:29:02,995 [myid:] - INFO [main:Environment@109] - Client environment:os.memory.max=455MB 2016-07-21 04:29:02,995 [myid:] - INFO [main:Environment@109] - Client environment:os.memory.total=60MB 2016-07-21 04:29:02,998 [myid:] - INFO [main:ZooKeeper@855] - Initiating client connection, connectString=127.0.0.1:11222 sessionTimeout=3000 watcher=org.apache.zookeeper.test.LoadFromLogTest@20772fd3 2016-07-21 04:29:03,020 [myid:127.0.0.1:11222] - INFO [main-SendThread(127.0.0.1:11222):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:11222. Will not attempt to authenticate using SASL (unknown error) 2016-07-21 04:29:03,021 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11222:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:48676 2016-07-21 04:29:03,021 [myid:127.0.0.1:11222] - INFO [main-SendThread(127.0.0.1:11222):ClientCnxn$SendThread@948] - Socket connection established, initiating session, client: /127.0.0.1:48676, server: 127.0.0.1/127.0.0.1:11222 2016-07-21 04:29:03,026 [myid:] - INFO [NIOWorkerThread-2:ZooKeeperServer@964] - Client attempting to establish new session at /127.0.0.1:48676 2016-07-21 04:29:03,030 [myid:] - INFO [SyncThread:0:FileTxnLog@204] - Creating new log file: log.1 2016-07-21 04:29:03,049 [myid:] - INFO [SyncThread:0:ZooKeeperServer@678] - Established session 0x10002e28feb0000 with negotiated timeout 6000 for client /127.0.0.1:48676 2016-07-21 04:29:03,051 [myid:127.0.0.1:11222] - INFO [main-SendThread(127.0.0.1:11222):ClientCnxn$SendThread@1381] - Session establishment complete on server 127.0.0.1/127.0.0.1:11222, sessionid = 0x10002e28feb0000, negotiated timeout = 6000 2016-07-21 04:29:03,244 [myid:] - INFO [main:LoadFromLogTest@553] - 
Set lastProcessedZxid to 2 2016-07-21 04:29:03,244 [myid:] - INFO [main:FileTxnSnapLog@298] - Snapshotting: 0x2 to /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/tmp/test393292139818745013.junit.dir/version-2/snapshot.2 2016-07-21 04:29:03,245 [myid:] - INFO [main:ZooKeeperServer@498] - shutting down 2016-07-21 04:29:03,245 [myid:] - INFO [main:SessionTrackerImpl@232] - Shutting down 2016-07-21 04:29:03,245 [myid:] - INFO [main:PrepRequestProcessor@965] - Shutting down 2016-07-21 04:29:03,245 [myid:] - INFO [main:SyncRequestProcessor@191] - Shutting down 2016-07-21 04:29:03,245 [myid:] - INFO [ProcessThread(sid:0 cport:11222)::PrepRequestProcessor@154] - PrepRequestProcessor exited loop! 2016-07-21 04:29:03,245 [myid:] - INFO [SyncThread:0:SyncRequestProcessor@169] - SyncRequestProcessor exited! 2016-07-21 04:29:03,246 [myid:] - INFO [main:FinalRequestProcessor@479] - shutdown of request processor complete 2016-07-21 04:29:03,246 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port11222,name1=InMemoryDataTree] 2016-07-21 04:29:03,246 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port11222] 2016-07-21 04:29:03,247 [myid:] - INFO [ConnnectionExpirer:NIOServerCnxnFactory$ConnectionExpirerThread@583] - ConnnectionExpirerThread interrupted 2016-07-21 04:29:03,247 [myid:] - INFO [NIOServerCxnFactory.SelectorThread-0:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port11222,name1=Connections,name2=127.0.0.1,name3=0x10002e28feb0000] 2016-07-21 04:29:03,247 [myid:] - INFO [NIOServerCxnFactory.SelectorThread-0:NIOServerCnxn@607] - Closed socket connection for client /127.0.0.1:48676 which had sessionid 0x10002e28feb0000 2016-07-21 04:29:03,247 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11222:NIOServerCnxnFactory$AcceptThread@219] - accept thread exitted 
run method 2016-07-21 04:29:03,248 [myid:127.0.0.1:11222] - INFO [main-SendThread(127.0.0.1:11222):ClientCnxn$SendThread@1231] - Unable to read additional data from server sessionid 0x10002e28feb0000, likely server has closed socket, closing socket connection and attempting reconnect 2016-07-21 04:29:03,248 [myid:] - INFO [NIOServerCxnFactory.SelectorThread-0:NIOServerCnxnFactory$SelectorThread@420] - selector thread exitted run method 2016-07-21 04:29:03,248 [myid:] - INFO [main:ZooKeeperServer@858] - minSessionTimeout set to 6000 2016-07-21 04:29:03,249 [myid:] - INFO [main:ZooKeeperServer@867] - maxSessionTimeout set to 60000 2016-07-21 04:29:03,249 [myid:] - INFO [main:ZooKeeperServer@156] - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/tmp/test393292139818745013.junit.dir/version-2 snapdir /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/tmp/test393292139818745013.junit.dir/version-2 2016-07-21 04:29:03,249 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 1 selector thread(s), 8 worker threads, and 64 kB direct buffers. 
2016-07-21 04:29:03,249 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port 0.0.0.0/0.0.0.0:11222 2016-07-21 04:29:03,251 [myid:] - INFO [main:FileSnap@83] - Reading snapshot /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/tmp/test393292139818745013.junit.dir/version-2/snapshot.2 2016-07-21 04:29:03,253 [myid:] - INFO [main:FileTxnSnapLog@298] - Snapshotting: 0x5 to /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/tmp/test393292139818745013.junit.dir/version-2/snapshot.5 2016-07-21 04:29:03,255 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 11222 2016-07-21 04:29:03,255 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11222:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:48677 2016-07-21 04:29:03,256 [myid:] - INFO [NIOWorkerThread-1:NIOServerCnxn@485] - Processing stat command from /127.0.0.1:48677 2016-07-21 04:29:03,256 [myid:] - INFO [NIOWorkerThread-1:StatCommand@49] - Stat command output 2016-07-21 04:29:03,256 [myid:] - INFO [NIOWorkerThread-1:NIOServerCnxn@607] - Closed socket connection for client /127.0.0.1:48677 (no session established for client) 2016-07-21 04:29:03,257 [myid:] - INFO [ConnnectionExpirer:NIOServerCnxnFactory$ConnectionExpirerThread@583] - ConnnectionExpirerThread interrupted 2016-07-21 04:29:03,257 [myid:] - INFO [NIOServerCxnFactory.SelectorThread-0:NIOServerCnxnFactory$SelectorThread@420] - selector thread exitted run method 2016-07-21 04:29:03,257 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11222:NIOServerCnxnFactory$AcceptThread@219] - accept thread exitted run method 2016-07-21 04:29:03,257 [myid:] - INFO [main:ZooKeeperServer@498] - shutting down 2016-07-21 04:29:03,257 [myid:] - INFO [main:SessionTrackerImpl@232] - Shutting down 2016-07-21 04:29:03,258 [myid:] - INFO [main:PrepRequestProcessor@965] - Shutting down 2016-07-21 04:29:03,258 [myid:] - INFO 
[main:SyncRequestProcessor@191] - Shutting down
2016-07-21 04:29:03,258 [myid:] - INFO [ProcessThread(sid:0 cport:11222)::PrepRequestProcessor@154] - PrepRequestProcessor exited loop!
2016-07-21 04:29:03,258 [myid:] - INFO [SyncThread:0:SyncRequestProcessor@169] - SyncRequestProcessor exited!
2016-07-21 04:29:03,258 [myid:] - INFO [main:FinalRequestProcessor@479] - shutdown of request processor complete
2016-07-21 04:29:03,259 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port11222,name1=InMemoryDataTree]
2016-07-21 04:29:03,259 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port11222]
2016-07-21 04:29:03,259 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@82] - Memory used 8501
2016-07-21 04:29:03,259 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@87] - Number of threads 8
2016-07-21 04:29:03,260 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@102] - FINISHED TEST METHOD testReloadSnapshotWithMissingParent
2016-07-21 04:29:03,260 [myid:] - INFO [main:ZKTestCase$1@65] - SUCCEEDED testReloadSnapshotWithMissingParent
2016-07-21 04:29:03,260 [myid:] - INFO [main:ZKTestCase$1@60] - FINISHED testReloadSnapshotWithMissingParent
2016-07-21 04:29:03,262 [myid:] - INFO [main:ZKTestCase$1@55] - STARTING testRestoreWithTransactionErrors
2016-07-21 04:29:03,262 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@77] - RUNNING TEST METHOD testRestoreWithTransactionErrors
2016-07-21 04:29:03,262 [myid:] - INFO [main:ZooKeeperServer@858] - minSessionTimeout set to 6000
2016-07-21 04:29:03,263 [myid:] - INFO [main:ZooKeeperServer@867] - maxSessionTimeout set to 60000
2016-07-21 04:29:03,263 [myid:] - INFO [main:ZooKeeperServer@156] - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/tmp/test7440274560972421200.junit.dir/version-2 snapdir /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/tmp/test7440274560972421200.junit.dir/version-2
2016-07-21 04:29:03,263 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 1 selector thread(s), 8 worker threads, and 64 kB direct buffers.
2016-07-21 04:29:03,263 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port 0.0.0.0/0.0.0.0:11222
2016-07-21 04:29:03,264 [myid:] - INFO [main:FileTxnSnapLog@298] - Snapshotting: 0x0 to /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/tmp/test7440274560972421200.junit.dir/version-2/snapshot.0
2016-07-21 04:29:03,265 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 11222
2016-07-21 04:29:03,266 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11222:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:48678
2016-07-21 04:29:03,267 [myid:] - INFO [NIOWorkerThread-1:NIOServerCnxn@485] - Processing stat command from /127.0.0.1:48678
2016-07-21 04:29:03,267 [myid:] - INFO [NIOWorkerThread-1:StatCommand@49] - Stat command output
2016-07-21 04:29:03,267 [myid:] - INFO [NIOWorkerThread-1:NIOServerCnxn@607] - Closed socket connection for client /127.0.0.1:48678 (no session established for client)
2016-07-21 04:29:03,268 [myid:] - INFO [main:ZooKeeper@855] - Initiating client connection, connectString=127.0.0.1:11222 sessionTimeout=3000 watcher=org.apache.zookeeper.test.LoadFromLogTest@5763013f
2016-07-21 04:29:03,268 [myid:127.0.0.1:11222] - INFO [main-SendThread(127.0.0.1:11222):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:11222. Will not attempt to authenticate using SASL (unknown error)
2016-07-21 04:29:03,269 [myid:127.0.0.1:11222] - INFO [main-SendThread(127.0.0.1:11222):ClientCnxn$SendThread@948] - Socket connection established, initiating session, client: /127.0.0.1:48679, server: 127.0.0.1/127.0.0.1:11222
2016-07-21 04:29:03,269 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11222:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:48679
2016-07-21 04:29:03,270 [myid:] - INFO [NIOWorkerThread-2:ZooKeeperServer@964] - Client attempting to establish new session at /127.0.0.1:48679
2016-07-21 04:29:03,270 [myid:] - INFO [SyncThread:0:FileTxnLog@204] - Creating new log file: log.1
2016-07-21 04:29:03,273 [myid:] - INFO [SyncThread:0:ZooKeeperServer@678] - Established session 0x10002e291910000 with negotiated timeout 6000 for client /127.0.0.1:48679
2016-07-21 04:29:03,274 [myid:127.0.0.1:11222] - INFO [main-SendThread(127.0.0.1:11222):ClientCnxn$SendThread@1381] - Session establishment complete on server 127.0.0.1/127.0.0.1:11222, sessionid = 0x10002e291910000, negotiated timeout = 6000
2016-07-21 04:29:03,472 [myid:] - INFO [ProcessThread(sid:0 cport:11222)::PrepRequestProcessor@841] - Got user-level KeeperException when processing sessionid:0x10002e291910000 type:create cxid:0x1 zxid:0x2 txntype:-1 reqpath:n/a Error Path:/invaliddir Error:KeeperErrorCode = NoNode for /invaliddir
2016-07-21 04:29:03,475 [myid:] - INFO [ProcessThread(sid:0 cport:11222)::PrepRequestProcessor@841] - Got user-level KeeperException when processing sessionid:0x10002e291910000 type:create cxid:0x2 zxid:0x3 txntype:-1 reqpath:n/a Error Path:/invaliddir Error:KeeperErrorCode = NoNode for /invaliddir
...[81 near-identical entries elided: cxid 0x3 through 0x53 (zxid 0x4 through 0x54), 04:29:03,477 to 04:29:03,614, each "Got user-level KeeperException ... Error:KeeperErrorCode = NoNode for /invaliddir"]...
2016-07-21 04:29:03,617 [myid:] - INFO [ProcessThread(sid:0 cport:11222)::PrepRequestProcessor@841] - Got user-level KeeperException when processing sessionid:0x10002e291910000 type:create cxid:0x54 zxid:0x55 txntype:-1 reqpath:n/a Error ...[truncated 64722 chars]... INFO [ProcessThread(sid:0 cport:11222)::PrepRequestProcessor@841] - Got user-level KeeperException when processing sessionid:0x10002e291910000 type:create cxid:0x12a zxid:0x12b txntype:-1 reqpath:n/a Error Path:/invaliddir Error:KeeperErrorCode = NoNode for /invaliddir
2016-07-21 04:29:03,976 [myid:] - INFO [ProcessThread(sid:0 cport:11222)::PrepRequestProcessor@841] - Got user-level KeeperException when processing sessionid:0x10002e291910000 type:create cxid:0x12b zxid:0x12c txntype:-1 reqpath:n/a Error Path:/invaliddir Error:KeeperErrorCode = NoNode for /invaliddir
2016-07-21 04:29:03,978 [myid:] - INFO [ProcessThread(sid:0 cport:11222)::PrepRequestProcessor@841] - Got user-level KeeperException when processing sessionid:0x10002e291910000 type:create cxid:0x12c zxid:0x12d txntype:-1 reqpath:n/a Error Path:/invaliddir Error:KeeperErrorCode = NoNode for /invaliddir
2016-07-21 04:29:03,980 [myid:] - INFO [ProcessThread(sid:0 cport:11222)::PrepRequestProcessor@647] - Processed session termination for sessionid: 0x10002e291910000
2016-07-21 04:29:03,981 [myid:] - INFO [NIOWorkerThread-5:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port11222,name1=Connections,name2=127.0.0.1,name3=0x10002e291910000]
2016-07-21 04:29:03,981 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@513] - EventThread shut down for session: 0x10002e291910000
2016-07-21 04:29:03,981 [myid:] - INFO [main:ZooKeeper@1313] - Session: 0x10002e291910000 closed
2016-07-21 04:29:03,981 [myid:] - INFO [NIOWorkerThread-5:NIOServerCnxn@607] - Closed socket connection for client /127.0.0.1:48679 which had sessionid 0x10002e291910000
2016-07-21 04:29:04,578 [myid:] - INFO [main:LoadFromLogTest@465] - Set lastProcessedZxid to 292
2016-07-21 04:29:04,579 [myid:] - INFO [main:FileTxnSnapLog@298] - Snapshotting: 0x124 to /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/tmp/test7440274560972421200.junit.dir/version-2/snapshot.124
2016-07-21 04:29:04,579 [myid:] - INFO [main:ZooKeeperServer@498] - shutting down
2016-07-21 04:29:04,580 [myid:] - INFO [main:SessionTrackerImpl@232] - Shutting down
2016-07-21 04:29:04,580 [myid:] - INFO [main:PrepRequestProcessor@965] - Shutting down
2016-07-21 04:29:04,580 [myid:] - INFO [main:SyncRequestProcessor@191] - Shutting down
2016-07-21 04:29:04,580 [myid:] - INFO [ProcessThread(sid:0 cport:11222)::PrepRequestProcessor@154] - PrepRequestProcessor exited loop!
2016-07-21 04:29:04,580 [myid:] - INFO [SyncThread:0:SyncRequestProcessor@169] - SyncRequestProcessor exited!
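The long run of PrepRequestProcessor entries above all report the same user-level KeeperException: each `create` under the nonexistent parent `/invaliddir` fails with `KeeperErrorCode = NoNode`, which is the transaction-error behavior testRestoreWithTransactionErrors exercises. When triaging a log like this, a small parser can pull out the fields that change between entries. A minimal sketch, using a hypothetical helper that is not part of ZooKeeper:

```python
import re

# Match the variable fields of a PrepRequestProcessor KeeperException entry,
# e.g. "... sessionid:0x10002e291910000 type:create cxid:0x1 zxid:0x2
#       txntype:-1 reqpath:n/a Error Path:/invaliddir Error:KeeperErrorCode = ..."
ENTRY = re.compile(
    r"sessionid:(?P<session>0x[0-9a-f]+)\s+"
    r"type:(?P<type>\w+)\s+"
    r"cxid:(?P<cxid>0x[0-9a-f]+)\s+"
    r"zxid:(?P<zxid>0x[0-9a-f]+).*?"
    r"Error Path:(?P<path>\S+)\s+"
    r"Error:(?P<error>.+)$"
)

def parse_keeper_exception(line):
    """Return the entry's fields as a dict, or None if the line doesn't match."""
    m = ENTRY.search(line)
    return m.groupdict() if m else None

sample = ("Got user-level KeeperException when processing "
          "sessionid:0x10002e291910000 type:create cxid:0x1 zxid:0x2 "
          "txntype:-1 reqpath:n/a Error Path:/invaliddir "
          "Error:KeeperErrorCode = NoNode for /invaliddir")
info = parse_keeper_exception(sample)
print(info["session"], info["cxid"], info["zxid"], info["path"])
```

Run over the whole run of entries, this makes the pattern obvious at a glance: the session id and error path never change while cxid and zxid increment in lockstep.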
2016-07-21 04:29:04,580 [myid:] - INFO [main:FinalRequestProcessor@479] - shutdown of request processor complete
2016-07-21 04:29:04,581 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port11222,name1=InMemoryDataTree]
2016-07-21 04:29:04,581 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port11222]
2016-07-21 04:29:04,582 [myid:] - INFO [ConnnectionExpirer:NIOServerCnxnFactory$ConnectionExpirerThread@583] - ConnnectionExpirerThread interrupted
2016-07-21 04:29:04,582 [myid:] - INFO [NIOServerCxnFactory.SelectorThread-0:NIOServerCnxnFactory$SelectorThread@420] - selector thread exitted run method
2016-07-21 04:29:04,583 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11222:NIOServerCnxnFactory$AcceptThread@219] - accept thread exitted run method
2016-07-21 04:29:04,583 [myid:] - INFO [main:ZooKeeperServer@858] - minSessionTimeout set to 6000
2016-07-21 04:29:04,584 [myid:] - INFO [main:ZooKeeperServer@867] - maxSessionTimeout set to 60000
2016-07-21 04:29:04,584 [myid:] - INFO [main:ZooKeeperServer@156] - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/tmp/test7440274560972421200.junit.dir/version-2 snapdir /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/tmp/test7440274560972421200.junit.dir/version-2
2016-07-21 04:29:04,584 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 1 selector thread(s), 8 worker threads, and 64 kB direct buffers.
2016-07-21 04:29:04,584 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port 0.0.0.0/0.0.0.0:11222
2016-07-21 04:29:04,586 [myid:] - INFO [main:FileSnap@83] - Reading snapshot /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/tmp/test7440274560972421200.junit.dir/version-2/snapshot.124
2016-07-21 04:29:04,593 [myid:] - INFO [main:FileTxnSnapLog@298] - Snapshotting: 0x12e to /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/tmp/test7440274560972421200.junit.dir/version-2/snapshot.12e
2016-07-21 04:29:04,594 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 11222
2016-07-21 04:29:04,595 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11222:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:48680
2016-07-21 04:29:04,596 [myid:] - INFO [NIOWorkerThread-1:NIOServerCnxn@485] - Processing stat command from /127.0.0.1:48680
2016-07-21 04:29:04,596 [myid:] - INFO [NIOWorkerThread-1:StatCommand@49] - Stat command output
2016-07-21 04:29:04,596 [myid:] - INFO [NIOWorkerThread-1:NIOServerCnxn@607] - Closed socket connection for client /127.0.0.1:48680 (no session established for client)
2016-07-21 04:29:04,597 [myid:] - INFO [ConnnectionExpirer:NIOServerCnxnFactory$ConnectionExpirerThread@583] - ConnnectionExpirerThread interrupted
2016-07-21 04:29:04,597 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11222:NIOServerCnxnFactory$AcceptThread@219] - accept thread exitted run method
2016-07-21 04:29:04,597 [myid:] - INFO [NIOServerCxnFactory.SelectorThread-0:NIOServerCnxnFactory$SelectorThread@420] - selector thread exitted run method
2016-07-21 04:29:04,598 [myid:] - INFO [main:ZooKeeperServer@498] - shutting down
2016-07-21 04:29:04,598 [myid:] - INFO [main:SessionTrackerImpl@232] - Shutting down
2016-07-21 04:29:04,598 [myid:] - INFO [main:PrepRequestProcessor@965] - Shutting down
2016-07-21 04:29:04,598 [myid:] - INFO [main:SyncRequestProcessor@191] - Shutting down
2016-07-21 04:29:04,598 [myid:] - INFO [ProcessThread(sid:0 cport:11222)::PrepRequestProcessor@154] - PrepRequestProcessor exited loop!
2016-07-21 04:29:04,598 [myid:] - INFO [SyncThread:0:SyncRequestProcessor@169] - SyncRequestProcessor exited!
2016-07-21 04:29:04,599 [myid:] - INFO [main:FinalRequestProcessor@479] - shutdown of request processor complete
2016-07-21 04:29:04,599 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port11222,name1=InMemoryDataTree]
2016-07-21 04:29:04,599 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port11222]
2016-07-21 04:29:04,599 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@82] - Memory used 16613
2016-07-21 04:29:04,600 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@87] - Number of threads 10
2016-07-21 04:29:04,600 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@102] - FINISHED TEST METHOD testRestoreWithTransactionErrors
2016-07-21 04:29:04,600 [myid:] - INFO [main:ZKTestCase$1@65] - SUCCEEDED testRestoreWithTransactionErrors
2016-07-21 04:29:04,600 [myid:] - INFO [main:ZKTestCase$1@60] - FINISHED testRestoreWithTransactionErrors
2016-07-21 04:29:04,601 [myid:] - INFO [main:ZKTestCase$1@55] - STARTING testPad
2016-07-21 04:29:04,601 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@77] - RUNNING TEST METHOD testPad
2016-07-21 04:29:04,601 [myid:] - INFO [main:FileTxnLog@204] - Creating new log file: log.123
2016-07-21 04:29:04,602 [myid:] - INFO [main:LoadFromLogTest@343] - Received magic : 1514884167 Expected : 1514884167
2016-07-21 04:29:04,602 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@82] - Memory used 16921
2016-07-21 04:29:04,602 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@87] - Number of threads 10
2016-07-21 04:29:04,602 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@102] - FINISHED TEST METHOD testPad
2016-07-21 04:29:04,602 [myid:] - INFO [main:ZKTestCase$1@65] - SUCCEEDED testPad
2016-07-21 04:29:04,603 [myid:] - INFO [main:ZKTestCase$1@60] - FINISHED testPad
2016-07-21 04:29:04,603 [myid:] - INFO [main:ZKTestCase$1@55] - STARTING testLoad
2016-07-21 04:29:04,603 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@77] - RUNNING TEST METHOD testLoad
2016-07-21 04:29:04,604 [myid:] - INFO [main:ZooKeeperServer@858] - minSessionTimeout set to 6000
2016-07-21 04:29:04,604 [myid:] - INFO [main:ZooKeeperServer@867] - maxSessionTimeout set to 60000
2016-07-21 04:29:04,604 [myid:] - INFO [main:ZooKeeperServer@156] - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/tmp/test6104680937889031731.junit.dir/version-2 snapdir /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/tmp/test6104680937889031731.junit.dir/version-2
2016-07-21 04:29:04,604 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 1 selector thread(s), 8 worker threads, and 64 kB direct buffers.
2016-07-21 04:29:04,651 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port 0.0.0.0/0.0.0.0:11222 2016-07-21 04:29:04,652 [myid:] - INFO [main:FileTxnSnapLog@298] - Snapshotting: 0x0 to /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/tmp/test6104680937889031731.junit.dir/version-2/snapshot.0 2016-07-21 04:29:04,654 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 11222 2016-07-21 04:29:04,654 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11222:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:48681 2016-07-21 04:29:04,655 [myid:] - INFO [NIOWorkerThread-1:NIOServerCnxn@485] - Processing stat command from /127.0.0.1:48681 2016-07-21 04:29:04,656 [myid:] - INFO [NIOWorkerThread-1:StatCommand@49] - Stat command output 2016-07-21 04:29:04,656 [myid:] - INFO [NIOWorkerThread-1:NIOServerCnxn@607] - Closed socket connection for client /127.0.0.1:48681 (no session established for client) 2016-07-21 04:29:04,656 [myid:] - INFO [main:ZooKeeper@855] - Initiating client connection, connectString=127.0.0.1:11222 sessionTimeout=3000 watcher=org.apache.zookeeper.test.LoadFromLogTest@24fa3073 2016-07-21 04:29:04,657 [myid:127.0.0.1:11222] - INFO [main-SendThread(127.0.0.1:11222):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:11222. 
Will not attempt to authenticate using SASL (unknown error) 2016-07-21 04:29:04,658 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11222:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:48682 2016-07-21 04:29:04,658 [myid:127.0.0.1:11222] - INFO [main-SendThread(127.0.0.1:11222):ClientCnxn$SendThread@948] - Socket connection established, initiating session, client: /127.0.0.1:48682, server: 127.0.0.1/127.0.0.1:11222 2016-07-21 04:29:04,659 [myid:] - INFO [NIOWorkerThread-2:ZooKeeperServer@964] - Client attempting to establish new session at /127.0.0.1:48682 2016-07-21 04:29:04,659 [myid:] - INFO [SyncThread:0:FileTxnLog@204] - Creating new log file: log.1 2016-07-21 04:29:04,663 [myid:] - INFO [SyncThread:0:ZooKeeperServer@678] - Established session 0x10002e296fd0000 with negotiated timeout 6000 for client /127.0.0.1:48682 2016-07-21 04:29:04,663 [myid:127.0.0.1:11222] - INFO [main-SendThread(127.0.0.1:11222):ClientCnxn$SendThread@1381] - Session establishment complete on server 127.0.0.1/127.0.0.1:11222, sessionid = 0x10002e296fd0000, negotiated timeout = 6000 2016-07-21 04:29:04,839 [myid:] - INFO [Snapshot Thread:FileTxnSnapLog@298] - Snapshotting: 0x63 to /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/tmp/test6104680937889031731.junit.dir/version-2/snapshot.63 2016-07-21 04:29:04,840 [myid:] - INFO [SyncThread:0:FileTxnLog@204] - Creating new log file: log.65 2016-07-21 04:29:04,958 [myid:] - INFO [Snapshot Thread:FileTxnSnapLog@298] - Snapshotting: 0xac to /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/tmp/test6104680937889031731.junit.dir/version-2/snapshot.ac 2016-07-21 04:29:04,959 [myid:] - INFO [SyncThread:0:FileTxnLog@204] - Creating new log file: log.ae 2016-07-21 04:29:05,068 [myid:127.0.0.1:11222] - INFO [main-SendThread(127.0.0.1:11222):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:11222. 
Will not attempt to authenticate using SASL (unknown error) 2016-07-21 04:29:05,069 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11222:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:48683 2016-07-21 04:29:05,069 [myid:127.0.0.1:11222] - INFO [main-SendThread(127.0.0.1:11222):ClientCnxn$SendThread@948] - Socket connection established, initiating session, client: /127.0.0.1:48683, server: 127.0.0.1/127.0.0.1:11222 2016-07-21 04:29:05,070 [myid:] - INFO [NIOWorkerThread-1:ZooKeeperServer@969] - Client attempting to renew session 0x10002e28feb0000 at /127.0.0.1:48683 2016-07-21 04:29:05,070 [myid:] - INFO [NIOWorkerThread-1:ZooKeeperServer@686] - Invalid session 0x10002e28feb0000 for client /127.0.0.1:48683, probably expired 2016-07-21 04:29:05,071 [myid:] - INFO [NIOWorkerThread-4:NIOServerCnxn@607] - Closed socket connection for client /127.0.0.1:48683 which had sessionid 0x10002e28feb0000 2016-07-21 04:29:05,071 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@513] - EventThread shut down for session: 0x10002e28feb0000 2016-07-21 04:29:05,071 [myid:127.0.0.1:11222] - WARN [main-SendThread(127.0.0.1:11222):ClientCnxn$SendThread@1367] - Unable to reconnect to ZooKeeper service, session 0x10002e28feb0000 has expired 2016-07-21 04:29:05,071 [myid:127.0.0.1:11222] - INFO [main-SendThread(127.0.0.1:11222):ClientCnxn$SendThread@1227] - Unable to reconnect to ZooKeeper service, session 0x10002e28feb0000 has expired, closing socket connection 2016-07-21 04:29:05,106 [myid:] - INFO [Snapshot Thread:FileTxnSnapLog@298] - Snapshotting: 0x107 to /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/tmp/test6104680937889031731.junit.dir/version-2/snapshot.107 2016-07-21 04:29:05,107 [myid:] - INFO [SyncThread:0:FileTxnLog@204] - Creating new log file: log.109 2016-07-21 04:29:05,165 [myid:] - INFO [ProcessThread(sid:0 cport:11222)::PrepRequestProcessor@647] - Processed session termination 
for sessionid: 0x10002e296fd0000 2016-07-21 04:29:05,166 [myid:] - INFO [main:ZooKeeper@1313] - Session: 0x10002e296fd0000 closed 2016-07-21 04:29:05,167 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@513] - EventThread shut down for session: 0x10002e296fd0000 2016-07-21 04:29:05,167 [myid:] - INFO [ConnnectionExpirer:NIOServerCnxnFactory$ConnectionExpirerThread@583] - ConnnectionExpirerThread interrupted 2016-07-21 04:29:05,166 [myid:] - INFO [NIOWorkerThread-7:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port11222,name1=Connections,name2=127.0.0.1,name3=0x10002e296fd0000] 2016-07-21 04:29:05,167 [myid:] - INFO [NIOServerCxnFactory.SelectorThread-0:NIOServerCnxnFactory$SelectorThread@420] - selector thread exitted run method 2016-07-21 04:29:05,167 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11222:NIOServerCnxnFactory$AcceptThread@219] - accept thread exitted run method 2016-07-21 04:29:05,168 [myid:] - INFO [NIOWorkerThread-7:NIOServerCnxn@607] - Closed socket connection for client /127.0.0.1:48682 which had sessionid 0x10002e296fd0000 2016-07-21 04:29:05,168 [myid:] - INFO [main:ZooKeeperServer@498] - shutting down 2016-07-21 04:29:05,168 [myid:] - INFO [main:SessionTrackerImpl@232] - Shutting down 2016-07-21 04:29:05,168 [myid:] - INFO [main:PrepRequestProcessor@965] - Shutting down 2016-07-21 04:29:05,169 [myid:] - INFO [main:SyncRequestProcessor@191] - Shutting down 2016-07-21 04:29:05,169 [myid:] - INFO [ProcessThread(sid:0 cport:11222)::PrepRequestProcessor@154] - PrepRequestProcessor exited loop! 2016-07-21 04:29:05,169 [myid:] - INFO [SyncThread:0:SyncRequestProcessor@169] - SyncRequestProcessor exited! 
2016-07-21 04:29:05,169 [myid:] - INFO [main:FinalRequestProcessor@479] - shutdown of request processor complete 2016-07-21 04:29:05,169 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port11222,name1=InMemoryDataTree] 2016-07-21 04:29:05,170 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port11222] 2016-07-21 04:29:05,170 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 11222 2016-07-21 04:29:05,172 [myid:] - INFO [main:LoadFromLogTest@115] - Txnlog size: 307248 bytes 2016-07-21 04:29:05,186 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@82] - Memory used 7789 2016-07-21 04:29:05,187 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@87] - Number of threads 11 2016-07-21 04:29:05,187 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@102] - FINISHED TEST METHOD testLoad 2016-07-21 04:29:05,187 [myid:] - INFO [main:ZKTestCase$1@65] - SUCCEEDED testLoad 2016-07-21 04:29:05,187 [myid:] - INFO [main:ZKTestCase$1@60] - FINISHED testLoad 2016-07-21 04:29:05,188 [myid:] - INFO [main:ZKTestCase$1@55] - STARTING testTxnFailure 2016-07-21 04:29:05,188 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@77] - RUNNING TEST METHOD testTxnFailure 2016-07-21 04:29:05,188 [myid:] - INFO [main:LoadFromLogTest@245] - Attempting to create /test/3 2016-07-21 04:29:05,189 [myid:] - INFO [main:LoadFromLogTest@281] - Children: 3 2 1 for /test 2016-07-21 04:29:05,189 [myid:] - INFO [main:LoadFromLogTest@282] - (cverions, pzxid): 3, 3 2016-07-21 04:29:05,189 [myid:] - INFO [main:LoadFromLogTest@319] - Children: 3 2 1 for /test 2016-07-21 04:29:05,189 [myid:] - INFO [main:LoadFromLogTest@320] - (cverions, pzxid): 4, 4 2016-07-21 04:29:05,189 [myid:] - INFO [main:LoadFromLogTest@248] - Attempting to create /test/3 2016-07-21 04:29:05,189 [myid:] - INFO [main:LoadFromLogTest@281] - Children: 3 2 1 for /test 
2016-07-21 04:29:05,190 [myid:] - INFO [main:LoadFromLogTest@282] - (cverions, pzxid): 4, 4 2016-07-21 04:29:05,190 [myid:] - INFO [main:LoadFromLogTest@319] - Children: 3 2 1 for /test 2016-07-21 04:29:05,190 [myid:] - INFO [main:LoadFromLogTest@320] - (cverions, pzxid): 5, 5 2016-07-21 04:29:05,190 [myid:] - INFO [main:LoadFromLogTest@252] - Attempting to create /test/3 2016-07-21 04:29:05,190 [myid:] - INFO [main:LoadFromLogTest@281] - Children: 3 2 1 for /test 2016-07-21 04:29:05,190 [myid:] - INFO [main:LoadFromLogTest@282] - (cverions, pzxid): 5, 5 2016-07-21 04:29:05,192 [myid:] - INFO [main:LoadFromLogTest@319] - Children: 3 2 1 for /test 2016-07-21 04:29:05,192 [myid:] - INFO [main:LoadFromLogTest@320] - (cverions, pzxid): 6, 6 2016-07-21 04:29:05,192 [myid:] - INFO [main:LoadFromLogTest@256] - Attempting to create /test/3 2016-07-21 04:29:05,193 [myid:] - INFO [main:LoadFromLogTest@281] - Children: 3 2 1 for /test 2016-07-21 04:29:05,193 [myid:] - INFO [main:LoadFromLogTest@282] - (cverions, pzxid): 6, 6 2016-07-21 04:29:05,193 [myid:] - INFO [main:LoadFromLogTest@319] - Children: 3 2 1 for /test 2016-07-21 04:29:05,193 [myid:] - INFO [main:LoadFromLogTest@320] - (cverions, pzxid): 7, 7 2016-07-21 04:29:05,193 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@82] - Memory used 8013 2016-07-21 04:29:05,193 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@87] - Number of threads 11 2016-07-21 04:29:05,194 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@102] - FINISHED TEST METHOD testTxnFailure 2016-07-21 04:29:05,194 [myid:] - INFO [main:ZKTestCase$1@65] - SUCCEEDED testTxnFailure 2016-07-21 04:29:05,194 [myid:] - INFO [main:ZKTestCase$1@60] - FINISHED testTxnFailure 2016-07-21 04:29:05,194 [myid:] - INFO [main:ZKTestCase$1@55] - STARTING testDatadirAutocreate 2016-07-21 04:29:05,194 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@77] - RUNNING TEST METHOD testDatadirAutocreate 2016-07-21 04:29:05,195 [myid:] - INFO 
[main:ZooKeeperServer@858] - minSessionTimeout set to 6000 2016-07-21 04:29:05,195 [myid:] - INFO [main:ZooKeeperServer@867] - maxSessionTimeout set to 60000 2016-07-21 04:29:05,195 [myid:] - INFO [main:ZooKeeperServer@156] - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/tmp/test6847426618122472335.junit.dir/version-2 snapdir /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/tmp/test6847426618122472335.junit.dir/version-2 2016-07-21 04:29:05,195 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 1 selector thread(s), 8 worker threads, and 64 kB direct buffers. 2016-07-21 04:29:05,196 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port 0.0.0.0/0.0.0.0:11222 2016-07-21 04:29:05,197 [myid:] - INFO [main:FileTxnSnapLog@298] - Snapshotting: 0x0 to /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/tmp/test6847426618122472335.junit.dir/version-2/snapshot.0 2016-07-21 04:29:05,198 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 11222 2016-07-21 04:29:05,198 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11222:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:48685 2016-07-21 04:29:05,199 [myid:] - INFO [NIOWorkerThread-1:NIOServerCnxn@485] - Processing stat command from /127.0.0.1:48685 2016-07-21 04:29:05,199 [myid:] - INFO [NIOWorkerThread-1:StatCommand@49] - Stat command output 2016-07-21 04:29:05,200 [myid:] - INFO [NIOWorkerThread-1:NIOServerCnxn@607] - Closed socket connection for client /127.0.0.1:48685 (no session established for client) 2016-07-21 04:29:05,200 [myid:] - INFO [main:ZooKeeperServer@498] - shutting down 2016-07-21 04:29:05,200 [myid:] - INFO [main:SessionTrackerImpl@232] - Shutting down 2016-07-21 04:29:05,200 
[myid:] - INFO [main:PrepRequestProcessor@965] - Shutting down 2016-07-21 04:29:05,200 [myid:] - INFO [main:SyncRequestProcessor@191] - Shutting down 2016-07-21 04:29:05,200 [myid:] - INFO [ProcessThread(sid:0 cport:11222)::PrepRequestProcessor@154] - PrepRequestProcessor exited loop! 2016-07-21 04:29:05,200 [myid:] - INFO [SyncThread:0:SyncRequestProcessor@169] - SyncRequestProcessor exited! 2016-07-21 04:29:05,201 [myid:] - INFO [main:FinalRequestProcessor@479] - shutdown of request processor complete 2016-07-21 04:29:05,201 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port11222,name1=InMemoryDataTree] 2016-07-21 04:29:05,201 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port11222] 2016-07-21 04:29:05,202 [myid:] - INFO [ConnnectionExpirer:NIOServerCnxnFactory$ConnectionExpirerThread@583] - ConnnectionExpirerThread interrupted 2016-07-21 04:29:05,202 [myid:] - INFO [NIOServerCxnFactory.SelectorThread-0:NIOServerCnxnFactory$SelectorThread@420] - selector thread exitted run method 2016-07-21 04:29:05,202 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11222:NIOServerCnxnFactory$AcceptThread@219] - accept thread exitted run method 2016-07-21 04:29:05,202 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 11222 2016-07-21 04:29:05,203 [myid:] - INFO [main:LoadFromLogTest@517] - Server failed to start - correct behavior org.apache.zookeeper.server.persistence.FileTxnSnapLog$DatadirException: Missing data directory /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/tmp/test6323392488588100984.junit.dir/version-2, automatic data directory creation is disabled (zookeeper.datadir.autocreate is false). Please create this directory manually. 
2016-07-21 04:29:05,203 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@82] - Memory used 8605 2016-07-21 04:29:05,203 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@87] - Number of threads 12 2016-07-21 04:29:05,203 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@102] - FINISHED TEST METHOD testDatadirAutocreate 2016-07-21 04:29:05,203 [myid:] - INFO [main:ZKTestCase$1@65] - SUCCEEDED testDatadirAutocreate 2016-07-21 04:29:05,203 [myid:] - INFO [main:ZKTestCase$1@60] - FINISHED testDatadirAutocreate 2016-07-21 04:29:05,204 [myid:] - INFO [main:ZKTestCase$1@55] - STARTING testRestore 2016-07-21 04:29:05,204 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@77] - RUNNING TEST METHOD testRestore 2016-07-21 04:29:05,205 [myid:] - INFO [main:ZooKeeperServer@858] - minSessionTimeout set to 6000 2016-07-21 04:29:05,432 [myid:] - INFO [SessionTracker:SessionTrackerImpl@158] - SessionTrackerImpl exited loop! 2016-07-21 04:29:05,432 [myid:] - INFO [SessionTracker:SessionTrackerImpl@158] - SessionTrackerImpl exited loop! 2016-07-21 04:29:05,432 [myid:] - INFO [SessionTracker:SessionTrackerImpl@158] - SessionTrackerImpl exited loop! 2016-07-21 04:29:05,432 [myid:] - INFO [SessionTracker:SessionTrackerImpl@158] - SessionTrackerImpl exited loop! 2016-07-21 04:29:05,432 [myid:] - INFO [SessionTracker:SessionTrackerImpl@158] - SessionTrackerImpl exited loop! 2016-07-21 04:29:05,432 [myid:] - INFO [SessionTracker:SessionTrackerImpl@158] - SessionTrackerImpl exited loop! 
2016-07-21 04:29:13,168 [myid:] - INFO [main:ZooKeeperServer@867] - maxSessionTimeout set to 60000 2016-07-21 04:29:13,169 [myid:] - INFO [main:ZooKeeperServer@156] - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/tmp/test4732590081806435620.junit.dir/version-2 snapdir /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/tmp/test4732590081806435620.junit.dir/version-2 2016-07-21 04:29:13,170 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 1 selector thread(s), 8 worker threads, and 64 kB direct buffers. 2016-07-21 04:29:13,170 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port 0.0.0.0/0.0.0.0:11222 2016-07-21 04:29:13,171 [myid:] - INFO [main:FileTxnSnapLog@298] - Snapshotting: 0x0 to /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/tmp/test4732590081806435620.junit.dir/version-2/snapshot.0 2016-07-21 04:29:13,173 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 11222 2016-07-21 04:29:13,173 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11222:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:48687 2016-07-21 04:29:13,175 [myid:] - INFO [NIOWorkerThread-1:NIOServerCnxn@485] - Processing stat command from /127.0.0.1:48687 2016-07-21 04:29:13,175 [myid:] - INFO [NIOWorkerThread-1:StatCommand@49] - Stat command output 2016-07-21 04:29:13,176 [myid:] - INFO [NIOWorkerThread-1:NIOServerCnxn@607] - Closed socket connection for client /127.0.0.1:48687 (no session established for client) 2016-07-21 04:29:13,176 [myid:] - INFO [main:ZooKeeper@855] - Initiating client connection, connectString=127.0.0.1:11222 sessionTimeout=3000 watcher=org.apache.zookeeper.test.LoadFromLogTest@3f9e244f 2016-07-21 04:29:13,177 [myid:127.0.0.1:11222] - 
INFO [main-SendThread(127.0.0.1:11222):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:11222. Will not attempt to authenticate using SASL (unknown error) 2016-07-21 04:29:13,177 [myid:127.0.0.1:11222] - INFO [main-SendThread(127.0.0.1:11222):ClientCnxn$SendThread@948] - Socket connection established, initiating session, client: /127.0.0.1:48688, server: 127.0.0.1/127.0.0.1:11222 2016-07-21 04:29:13,177 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11222:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:48688 2016-07-21 04:29:13,178 [myid:] - INFO [NIOWorkerThread-2:ZooKeeperServer@964] - Client attempting to establish new session at /127.0.0.1:48688 2016-07-21 04:29:13,179 [myid:] - INFO [SyncThread:0:FileTxnLog@204] - Creating new log file: log.1 2016-07-21 04:29:13,182 [myid:] - INFO [SyncThread:0:ZooKeeperServer@678] - Established session 0x10002e2b8440000 with negotiated timeout 6000 for client /127.0.0.1:48688 2016-07-21 04:29:13,182 [myid:127.0.0.1:11222] - INFO [main-SendThread(127.0.0.1:11222):ClientCnxn$SendThread@1381] - Session establishment complete on server 127.0.0.1/127.0.0.1:11222, sessionid = 0x10002e2b8440000, negotiated timeout = 6000 2016-07-21 04:29:13,823 [myid:] - INFO [ProcessThread(sid:0 cport:11222)::PrepRequestProcessor@647] - Processed session termination for sessionid: 0x10002e2b8440000 2016-07-21 04:29:13,824 [myid:] - INFO [main:ZooKeeper@1313] - Session: 0x10002e2b8440000 closed 2016-07-21 04:29:13,824 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@513] - EventThread shut down for session: 0x10002e2b8440000 2016-07-21 04:29:13,824 [myid:] - INFO [NIOWorkerThread-7:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port11222,name1=Connections,name2=127.0.0.1,name3=0x10002e2b8440000] 2016-07-21 04:29:13,824 [myid:] - INFO [main:LoadFromLogTest@387] - Set lastProcessedZxid to 293 2016-07-21 
04:29:13,825 [myid:] - INFO [NIOWorkerThread-7:NIOServerCnxn@607] - Closed socket connection for client /127.0.0.1:48688 which had sessionid 0x10002e2b8440000 2016-07-21 04:29:13,825 [myid:] - INFO [main:FileTxnSnapLog@298] - Snapshotting: 0x125 to /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/tmp/test4732590081806435620.junit.dir/version-2/snapshot.125 2016-07-21 04:29:13,830 [myid:] - INFO [main:ZooKeeperServer@498] - shutting down 2016-07-21 04:29:13,830 [myid:] - INFO [main:SessionTrackerImpl@232] - Shutting down 2016-07-21 04:29:13,830 [myid:] - INFO [main:PrepRequestProcessor@965] - Shutting down 2016-07-21 04:29:13,830 [myid:] - INFO [main:SyncRequestProcessor@191] - Shutting down 2016-07-21 04:29:13,830 [myid:] - INFO [ProcessThread(sid:0 cport:11222)::PrepRequestProcessor@154] - PrepRequestProcessor exited loop! 2016-07-21 04:29:13,830 [myid:] - INFO [SyncThread:0:SyncRequestProcessor@169] - SyncRequestProcessor exited! 2016-07-21 04:29:13,831 [myid:] - INFO [main:FinalRequestProcessor@479] - shutdown of request processor complete 2016-07-21 04:29:13,831 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port11222,name1=InMemoryDataTree] 2016-07-21 04:29:13,831 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port11222] 2016-07-21 04:29:13,832 [myid:] - INFO [ConnnectionExpirer:NIOServerCnxnFactory$ConnectionExpirerThread@583] - ConnnectionExpirerThread interrupted 2016-07-21 04:29:13,834 [myid:] - INFO [NIOServerCxnFactory.SelectorThread-0:NIOServerCnxnFactory$SelectorThread@420] - selector thread exitted run method 2016-07-21 04:29:13,834 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11222:NIOServerCnxnFactory$AcceptThread@219] - accept thread exitted run method 2016-07-21 04:29:13,835 [myid:] - INFO [main:ZooKeeperServer@858] - minSessionTimeout set to 6000 2016-07-21 04:29:13,835 
[myid:] - INFO [main:ZooKeeperServer@867] - maxSessionTimeout set to 60000 2016-07-21 04:29:13,835 [myid:] - INFO [main:ZooKeeperServer@156] - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/tmp/test4732590081806435620.junit.dir/version-2 snapdir /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/tmp/test4732590081806435620.junit.dir/version-2 2016-07-21 04:29:13,836 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 1 selector thread(s), 8 worker threads, and 64 kB direct buffers. 2016-07-21 04:29:13,836 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port 0.0.0.0/0.0.0.0:11222 2016-07-21 04:29:13,837 [myid:] - INFO [main:FileSnap@83] - Reading snapshot /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/tmp/test4732590081806435620.junit.dir/version-2/snapshot.125 2016-07-21 04:29:13,858 [myid:] - INFO [main:FileTxnSnapLog@298] - Snapshotting: 0x12f to /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/tmp/test4732590081806435620.junit.dir/version-2/snapshot.12f 2016-07-21 04:29:13,862 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 11222 2016-07-21 04:29:14,432 [myid:] - INFO [SessionTracker:SessionTrackerImpl@158] - SessionTrackerImpl exited loop! 
2016-07-21 04:29:19,360 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11222:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:48689 2016-07-21 04:29:19,362 [myid:] - INFO [NIOWorkerThread-1:NIOServerCnxn@485] - Processing stat command from /127.0.0.1:48689 2016-07-21 04:29:19,362 [myid:] - INFO [NIOWorkerThread-1:StatCommand@49] - Stat command output 2016-07-21 04:29:19,362 [myid:] - INFO [NIOWorkerThread-1:NIOServerCnxn@607] - Closed socket connection for client /127.0.0.1:48689 (no session established for client) 2016-07-21 04:29:19,363 [myid:] - INFO [main:ZooKeeper@855] - Initiating client connection, connectString=127.0.0.1:11222 sessionTimeout=3000 watcher=org.apache.zookeeper.test.LoadFromLogTest@3f9e244f 2016-07-21 04:29:19,364 [myid:127.0.0.1:11222] - INFO [main-SendThread(127.0.0.1:11222):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:11222. Will not attempt to authenticate using SASL (unknown error) 2016-07-21 04:29:19,364 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11222:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:48690 2016-07-21 04:29:19,364 [myid:127.0.0.1:11222] - INFO [main-SendThread(127.0.0.1:11222):ClientCnxn$SendThread@948] - Socket connection established, initiating session, client: /127.0.0.1:48690, server: 127.0.0.1/127.0.0.1:11222 2016-07-21 04:29:19,365 [myid:] - INFO [NIOWorkerThread-2:ZooKeeperServer@964] - Client attempting to establish new session at /127.0.0.1:48690 2016-07-21 04:29:19,366 [myid:] - INFO [SyncThread:0:FileTxnLog@204] - Creating new log file: log.130 2016-07-21 04:29:19,368 [myid:] - INFO [SyncThread:0:ZooKeeperServer@678] - Established session 0x10002e2baf60000 with negotiated timeout 6000 for client /127.0.0.1:48690 2016-07-21 04:29:19,369 [myid:127.0.0.1:11222] - INFO [main-SendThread(127.0.0.1:11222):ClientCnxn$SendThread@1381] - Session establishment complete on 
server 127.0.0.1/127.0.0.1:11222, sessionid = 0x10002e2baf60000, negotiated timeout = 6000 2016-07-21 04:29:19,574 [myid:] - INFO [ProcessThread(sid:0 cport:11222)::PrepRequestProcessor@647] - Processed session termination for sessionid: 0x10002e2baf60000 2016-07-21 04:29:19,575 [myid:] - INFO [main:ZooKeeper@1313] - Session: 0x10002e2baf60000 closed 2016-07-21 04:29:19,575 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@513] - EventThread shut down for session: 0x10002e2baf60000 2016-07-21 04:29:19,575 [myid:] - INFO [NIOWorkerThread-1:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port11222,name1=Connections,name2=127.0.0.1,name3=0x10002e2baf60000] 2016-07-21 04:29:19,575 [myid:] - INFO [main:LoadFromLogTest@420] - Expected /invalidsnap/test-0000000300 found /invalidsnap/test-0000000300 2016-07-21 04:29:19,576 [myid:] - INFO [NIOWorkerThread-1:NIOServerCnxn@607] - Closed socket connection for client /127.0.0.1:48690 which had sessionid 0x10002e2baf60000 2016-07-21 04:29:19,576 [myid:] - INFO [ConnnectionExpirer:NIOServerCnxnFactory$ConnectionExpirerThread@583] - ConnnectionExpirerThread interrupted 2016-07-21 04:29:19,578 [myid:] - INFO [NIOServerCxnFactory.SelectorThread-0:NIOServerCnxnFactory$SelectorThread@420] - selector thread exitted run method 2016-07-21 04:29:19,578 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11222:NIOServerCnxnFactory$AcceptThread@219] - accept thread exitted run method 2016-07-21 04:29:19,578 [myid:] - INFO [main:ZooKeeperServer@498] - shutting down 2016-07-21 04:29:19,578 [myid:] - INFO [main:SessionTrackerImpl@232] - Shutting down 2016-07-21 04:29:19,578 [myid:] - INFO [main:PrepRequestProcessor@965] - Shutting down 2016-07-21 04:29:19,578 [myid:] - INFO [main:SyncRequestProcessor@191] - Shutting down 2016-07-21 04:29:19,578 [myid:] - INFO [ProcessThread(sid:0 cport:11222)::PrepRequestProcessor@154] - PrepRequestProcessor exited loop! 
2016-07-21 04:29:19,579 [myid:] - INFO [SyncThread:0:SyncRequestProcessor@169] - SyncRequestProcessor exited! 2016-07-21 04:29:19,579 [myid:] - INFO [main:FinalRequestProcessor@479] - shutdown of request processor complete 2016-07-21 04:29:19,579 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port11222,name1=InMemoryDataTree] 2016-07-21 04:29:19,580 [myid:] - INFO [main:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port11222] 2016-07-21 04:29:19,580 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@82] - Memory used 16954 2016-07-21 04:29:19,580 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@87] - Number of threads 6 2016-07-21 04:29:19,580 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@102] - FINISHED TEST METHOD testRestore 2016-07-21 04:29:19,580 [myid:] - INFO [main:ZKTestCase$1@65] - SUCCEEDED testRestore 2016-07-21 04:29:19,580 [myid:] - INFO [main:ZKTestCase$1@60] - FINISHED testRestore 2016-07-21 04:29:19,581 [myid:] - INFO [main:ZKTestCase$1@55] - STARTING testLoadFailure 2016-07-21 04:29:19,581 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@77] - RUNNING TEST METHOD testLoadFailure 2016-07-21 04:29:19,581 [myid:] - INFO [main:ZooKeeperServer@858] - minSessionTimeout set to 6000 2016-07-21 04:29:19,581 [myid:] - INFO [main:ZooKeeperServer@867] - maxSessionTimeout set to 60000 2016-07-21 04:29:19,582 [myid:] - INFO [main:ZooKeeperServer@156] - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/tmp/test436661564740529918.junit.dir/version-2 snapdir /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/tmp/test436661564740529918.junit.dir/version-2 2016-07-21 04:29:19,582 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s 
sessionless connection timeout, 1 selector thread(s), 8 worker threads, and 64 kB direct buffers. 2016-07-21 04:29:19,582 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port 0.0.0.0/0.0.0.0:11222 2016-07-21 04:29:19,583 [myid:] - INFO [main:FileTxnSnapLog@298] - Snapshotting: 0x0 to /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk-openjdk7/trunk/build/test/tmp/test436661564740529918.junit.dir/version-2/snapshot.0 2016-07-21 04:29:19,584 [myid:] - INFO [main:FourLetterWordMain@85] - connecting to 127.0.0.1 11222 2016-07-21 04:29:19,585 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11222:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:48691 2016-07-21 04:29:19,586 [myid:] - INFO [NIOWorkerThread-1:NIOServerCnxn@485] - Processing stat command from /127.0.0.1:48691 2016-07-21 04:29:19,586 [myid:] - INFO [NIOWorkerThread-1:StatCommand@49] - Stat command output 2016-07-21 04:29:19,586 [myid:] - INFO [NIOWorkerThread-1:NIOServerCnxn@607] - Closed socket connection for client /127.0.0.1:48691 (no session established for client) 2016-07-21 04:29:19,586 [myid:] - INFO [main:ZooKeeper@855] - Initiating client connection, connectString=127.0.0.1:11222 sessionTimeout=3000 watcher=org.apache.zookeeper.test.LoadFromLogTest@1b2090ab 2016-07-21 04:29:19,587 [myid:127.0.0.1:11222] - INFO [main-SendThread(127.0.0.1:11222):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:11222. 
Will not attempt to authenticate using SASL (unknown error) 2016-07-21 04:29:19,588 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11222:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:48692 2016-07-21 04:29:19,588 [myid:127.0.0.1:11222] - INFO [main-SendThread(127.0.0.1:11222):ClientCnxn$SendThread@948] - Socket connection established, initiating session, client: /127.0.0.1:48692, server: 127.0.0.1/127.0.0.1:11222 2016-07-21 04:29:20,432 [myid:] - INFO [SessionTracker:SessionTrackerImpl@158] - SessionTrackerImpl exited loop! 2016-07-21 04:29:27,620 [myid:127.0.0.1:11222] - WARN [main-SendThread(127.0.0.1:11222):ClientCnxn$SendThread@1181] - Client session timed out, have not heard from server in 8032ms for sessionid 0x0 2016-07-21 04:29:27,621 [myid:127.0.0.1:11222] - INFO [main-SendThread(127.0.0.1:11222):ClientCnxn$SendThread@1229] - Client session timed out, have not heard from server in 8032ms for sessionid 0x0, closing socket connection and attempting reconnect 2016-07-21 04:29:27,621 [myid:] - INFO [NIOWorkerThread-2:ZooKeeperServer@964] - Client attempting to establish new session at /127.0.0.1:48692 2016-07-21 04:29:27,622 [myid:] - INFO [SyncThread:0:FileTxnLog@204] - Creating new log file: log.1 2016-07-21 04:29:27,624 [myid:] - INFO [SyncThread:0:ZooKeeperServer@678] - Established session 0x10002e2d1500000 with negotiated timeout 6000 for client /127.0.0.1:48692 2016-07-21 04:29:27,625 [myid:] - WARN [NIOWorkerThread-4:NIOServerCnxn@365] - Unable to read additional data from client sessionid 0x10002e2d1500000, likely client has closed socket 2016-07-21 04:29:27,626 [myid:] - INFO [NIOWorkerThread-4:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port11222,name1=Connections,name2=127.0.0.1,name3=0x10002e2d1500000] 2016-07-21 04:29:27,626 [myid:] - INFO [NIOWorkerThread-4:NIOServerCnxn@607] - Closed socket connection for client /127.0.0.1:48692 which had 
sessionid 0x10002e2d1500000 2016-07-21 04:29:28,739 [myid:127.0.0.1:11222] - INFO [main-SendThread(127.0.0.1:11222):ClientCnxn$SendThread@1113] - Opening socket connection to server 127.0.0.1/127.0.0.1:11222. Will not attempt to authenticate using SASL (unknown error) 2016-07-21 04:29:28,740 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11222:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:48693 2016-07-21 04:29:28,740 [myid:127.0.0.1:11222] - INFO [main-SendThread(127.0.0.1:11222):ClientCnxn$SendThread@948] - Socket connection established, initiating session, client: /127.0.0.1:48693, server: 127.0.0.1/127.0.0.1:11222 2016-07-21 04:29:28,741 [myid:] - INFO [NIOWorkerThread-5:ZooKeeperServer@964] - Client attempting to establish new session at /127.0.0.1:48693 2016-07-21 04:29:28,742 [myid:] - INFO [SyncThread:0:ZooKeeperServer@678] - Established session 0x10002e2d1500001 with negotiated timeout 6000 for client /127.0.0.1:48693 2016-07-21 04:29:28,743 [myid:127.0.0.1:11222] - INFO [main-SendThread(127.0.0.1:11222):ClientCnxn$SendThread@1381] - Session establishment complete on server 127.0.0.1/127.0.0.1:11222, sessionid = 0x10002e2d1500001, negotiated timeout = 6000 2016-07-21 04:29:28,744 [myid:] - INFO [ProcessThread(sid:0 cport:11222)::PrepRequestProcessor@647] - Processed session termination for sessionid: 0x10002e2d1500001 2016-07-21 04:29:28,746 [myid:] - INFO [NIOWorkerThread-8:MBeanRegistry@128] - Unregister MBean [org.apache.ZooKeeperService:name0=StandaloneServer_port11222,name1=Connections,name2=127.0.0.1,name3=0x10002e2d1500001] 2016-07-21 04:29:28,746 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@513] - EventThread shut down for session: 0x10002e2d1500001 2016-07-21 04:29:28,746 [myid:] - INFO [NIOWorkerThread-8:NIOServerCnxn@607] - Closed socket connection for client /127.0.0.1:48693 which had sessionid 0x10002e2d1500001 2016-07-21 04:29:28,746 [myid:] - INFO [main:ZooKeeper@1313] - 
Session: 0x10002e2d1500001 closed 2016-07-21 04:29:28,748 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@98] - TEST METHOD FAILED testLoadFailure org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /data- at org.apache.zookeeper.KeeperException.create(KeeperException.java:99) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:1412) at org.apache.zookeeper.test.LoadFromLogTest.testLoadFailure(LoadFromLogTest.java:157) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:79) at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:53) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at 
junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:38) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906) 2016-07-21 04:29:28,750 [myid:] - INFO [main:ZKTestCase$1@70] - FAILED testLoadFailure org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /data- at org.apache.zookeeper.KeeperException.create(KeeperException.java:99) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:1412) at org.apache.zookeeper.test.LoadFromLogTest.testLoadFailure(LoadFromLogTest.java:157) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:79) at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:53) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) 
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:38) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906) 2016-07-21 04:29:29,830 [myid:] - INFO [main:ZKTestCase$1@60] - FINISHED testLoadFailure {noformat} |
flaky, flaky-test | 9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 3 years, 28 weeks ago |
Reviewed
|
0|i31cjz: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2483 | Flaky Test: org.apache.zookeeper.test.LETest.testLE |
Test | Closed | Major | Duplicate | Michael Han | Michael Han | Michael Han | 21/Jul/16 17:45 | 17/May/17 23:44 | 27/Jul/16 17:40 | 3.4.8, 3.5.2 | 3.5.3 | server, tests | 0 | 2 | ZOOKEEPER-2135, ZOOKEEPER-1932 | From https://builds.apache.org/job/ZooKeeper_branch34/1587/ {noformat} Error Message Threads didn't join Stacktrace junit.framework.AssertionFailedError: Threads didn't join at org.apache.zookeeper.test.LETest.testLE(LETest.java:120) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:55) Standard Output 2016-07-21 06:02:29,408 [myid:] - INFO [main:ZKTestCase$1@50] - STARTING testLE 2016-07-21 06:02:29,413 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@53] - RUNNING TEST METHOD testLE 2016-07-21 06:02:29,417 [myid:] - INFO [main:PortAssignment@32] - assigning port 11221 2016-07-21 06:02:29,434 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.1 to address: /127.0.0.1 2016-07-21 06:02:29,450 [myid:] - INFO [main:PortAssignment@32] - assigning port 11222 2016-07-21 06:02:29,450 [myid:] - INFO [main:PortAssignment@32] - assigning port 11223 2016-07-21 06:02:29,450 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.1 to address: /127.0.0.1 2016-07-21 06:02:29,450 [myid:] - INFO [main:PortAssignment@32] - assigning port 11224 2016-07-21 06:02:29,451 [myid:] - INFO [main:PortAssignment@32] - assigning port 11225 2016-07-21 06:02:29,451 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.1 to address: /127.0.0.1 2016-07-21 06:02:29,451 [myid:] - INFO [main:PortAssignment@32] - assigning port 11226 2016-07-21 06:02:29,451 [myid:] - INFO [main:PortAssignment@32] - assigning port 11227 2016-07-21 06:02:29,452 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.1 to address: /127.0.0.1 2016-07-21 06:02:29,452 [myid:] - INFO [main:PortAssignment@32] - assigning port 11228 2016-07-21 06:02:29,452 [myid:] - 
INFO [main:PortAssignment@32] - assigning port 11229 2016-07-21 06:02:29,452 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.1 to address: /127.0.0.1 2016-07-21 06:02:29,455 [myid:] - INFO [main:PortAssignment@32] - assigning port 11230 2016-07-21 06:02:29,455 [myid:] - INFO [main:PortAssignment@32] - assigning port 11231 2016-07-21 06:02:29,456 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.1 to address: /127.0.0.1 2016-07-21 06:02:29,456 [myid:] - INFO [main:PortAssignment@32] - assigning port 11232 2016-07-21 06:02:29,456 [myid:] - INFO [main:PortAssignment@32] - assigning port 11233 2016-07-21 06:02:29,456 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.1 to address: /127.0.0.1 2016-07-21 06:02:29,457 [myid:] - INFO [main:PortAssignment@32] - assigning port 11234 2016-07-21 06:02:29,457 [myid:] - INFO [main:PortAssignment@32] - assigning port 11235 2016-07-21 06:02:29,457 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.1 to address: /127.0.0.1 2016-07-21 06:02:29,458 [myid:] - INFO [main:PortAssignment@32] - assigning port 11236 2016-07-21 06:02:29,458 [myid:] - INFO [main:PortAssignment@32] - assigning port 11237 2016-07-21 06:02:29,458 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.1 to address: /127.0.0.1 2016-07-21 06:02:29,458 [myid:] - INFO [main:PortAssignment@32] - assigning port 11238 2016-07-21 06:02:29,459 [myid:] - INFO [main:PortAssignment@32] - assigning port 11239 2016-07-21 06:02:29,459 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.1 to address: /127.0.0.1 2016-07-21 06:02:29,459 [myid:] - INFO [main:PortAssignment@32] - assigning port 11240 2016-07-21 06:02:29,459 [myid:] - INFO [main:PortAssignment@32] - assigning port 11241 2016-07-21 06:02:29,460 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.1 to address: /127.0.0.1 
2016-07-21 06:02:29,460 [myid:] - INFO [main:PortAssignment@32] - assigning port 11242 2016-07-21 06:02:29,460 [myid:] - INFO [main:PortAssignment@32] - assigning port 11243 2016-07-21 06:02:29,460 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.1 to address: /127.0.0.1 2016-07-21 06:02:29,461 [myid:] - INFO [main:PortAssignment@32] - assigning port 11244 2016-07-21 06:02:29,461 [myid:] - INFO [main:PortAssignment@32] - assigning port 11245 2016-07-21 06:02:29,461 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.1 to address: /127.0.0.1 2016-07-21 06:02:29,461 [myid:] - INFO [main:PortAssignment@32] - assigning port 11246 2016-07-21 06:02:29,462 [myid:] - INFO [main:PortAssignment@32] - assigning port 11247 2016-07-21 06:02:29,462 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.1 to address: /127.0.0.1 2016-07-21 06:02:29,462 [myid:] - INFO [main:PortAssignment@32] - assigning port 11248 2016-07-21 06:02:29,462 [myid:] - INFO [main:PortAssignment@32] - assigning port 11249 2016-07-21 06:02:29,462 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.1 to address: /127.0.0.1 2016-07-21 06:02:29,463 [myid:] - INFO [main:PortAssignment@32] - assigning port 11250 2016-07-21 06:02:29,463 [myid:] - INFO [main:PortAssignment@32] - assigning port 11251 2016-07-21 06:02:29,463 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.1 to address: /127.0.0.1 2016-07-21 06:02:29,464 [myid:] - INFO [main:PortAssignment@32] - assigning port 11252 2016-07-21 06:02:29,464 [myid:] - INFO [main:PortAssignment@32] - assigning port 11253 2016-07-21 06:02:29,464 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.1 to address: /127.0.0.1 2016-07-21 06:02:29,464 [myid:] - INFO [main:PortAssignment@32] - assigning port 11254 2016-07-21 06:02:29,464 [myid:] - INFO [main:PortAssignment@32] - assigning port 11255 2016-07-21 
06:02:29,465 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.1 to address: /127.0.0.1 2016-07-21 06:02:29,465 [myid:] - INFO [main:PortAssignment@32] - assigning port 11256 2016-07-21 06:02:29,465 [myid:] - INFO [main:PortAssignment@32] - assigning port 11257 2016-07-21 06:02:29,465 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.1 to address: /127.0.0.1 2016-07-21 06:02:29,466 [myid:] - INFO [main:PortAssignment@32] - assigning port 11258 2016-07-21 06:02:29,466 [myid:] - INFO [main:PortAssignment@32] - assigning port 11259 2016-07-21 06:02:29,466 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.1 to address: /127.0.0.1 2016-07-21 06:02:29,466 [myid:] - INFO [main:PortAssignment@32] - assigning port 11260 2016-07-21 06:02:29,467 [myid:] - INFO [main:PortAssignment@32] - assigning port 11261 2016-07-21 06:02:29,467 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.1 to address: /127.0.0.1 2016-07-21 06:02:29,468 [myid:] - INFO [main:PortAssignment@32] - assigning port 11262 2016-07-21 06:02:29,468 [myid:] - INFO [main:PortAssignment@32] - assigning port 11263 2016-07-21 06:02:29,468 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.1 to address: /127.0.0.1 2016-07-21 06:02:29,469 [myid:] - INFO [main:PortAssignment@32] - assigning port 11264 2016-07-21 06:02:29,469 [myid:] - INFO [main:PortAssignment@32] - assigning port 11265 2016-07-21 06:02:29,469 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.1 to address: /127.0.0.1 2016-07-21 06:02:29,470 [myid:] - INFO [main:PortAssignment@32] - assigning port 11266 2016-07-21 06:02:29,470 [myid:] - INFO [main:PortAssignment@32] - assigning port 11267 2016-07-21 06:02:29,470 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.1 to address: /127.0.0.1 2016-07-21 06:02:29,471 [myid:] - INFO [main:PortAssignment@32] - 
assigning port 11268 2016-07-21 06:02:29,471 [myid:] - INFO [main:PortAssignment@32] - assigning port 11269 2016-07-21 06:02:29,471 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.1 to address: /127.0.0.1 2016-07-21 06:02:29,472 [myid:] - INFO [main:PortAssignment@32] - assigning port 11270 2016-07-21 06:02:29,472 [myid:] - INFO [main:PortAssignment@32] - assigning port 11271 2016-07-21 06:02:29,472 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.1 to address: /127.0.0.1 2016-07-21 06:02:29,473 [myid:] - INFO [main:PortAssignment@32] - assigning port 11272 2016-07-21 06:02:29,473 [myid:] - INFO [main:PortAssignment@32] - assigning port 11273 2016-07-21 06:02:29,473 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.1 to address: /127.0.0.1 2016-07-21 06:02:29,474 [myid:] - INFO [main:PortAssignment@32] - assigning port 11274 2016-07-21 06:02:29,474 [myid:] - INFO [main:PortAssignment@32] - assigning port 11275 2016-07-21 06:02:29,474 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.1 to address: /127.0.0.1 2016-07-21 06:02:29,474 [myid:] - INFO [main:PortAssignment@32] - assigning port 11276 2016-07-21 06:02:29,475 [myid:] - INFO [main:PortAssignment@32] - assigning port 11277 2016-07-21 06:02:29,475 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.1 to address: /127.0.0.1 2016-07-21 06:02:29,475 [myid:] - INFO [main:PortAssignment@32] - assigning port 11278 2016-07-21 06:02:29,475 [myid:] - INFO [main:PortAssignment@32] - assigning port 11279 2016-07-21 06:02:29,476 [myid:] - INFO [main:QuorumPeer$QuorumServer@149] - Resolved hostname: 127.0.0.1 to address: /127.0.0.1 2016-07-21 06:02:29,476 [myid:] - INFO [main:PortAssignment@32] - assigning port 11280 2016-07-21 06:02:29,505 [myid:] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:11222 2016-07-21 06:02:29,530 [myid:] - INFO 
[main:QuorumPeer@533] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 06:02:29,534 [myid:] - INFO [main:QuorumPeer@548] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 06:02:29,543 [myid:] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:11224 2016-07-21 06:02:29,545 [myid:] - INFO [main:QuorumPeer@533] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 06:02:29,546 [myid:] - INFO [main:QuorumPeer@548] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 06:02:29,549 [myid:] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:11226 2016-07-21 06:02:29,550 [myid:] - INFO [main:QuorumPeer@533] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 06:02:29,551 [myid:] - INFO [main:QuorumPeer@548] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 06:02:29,553 [myid:] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:11228 2016-07-21 06:02:29,554 [myid:] - INFO [main:QuorumPeer@533] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 06:02:29,555 [myid:] - INFO [main:QuorumPeer@548] - acceptedEpoch not found! Creating with a reasonable default of 0. 
This should only happen when you are upgrading your installation 2016-07-21 06:02:29,558 [myid:] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:11230 2016-07-21 06:02:29,558 [myid:] - INFO [main:QuorumPeer@533] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 06:02:29,559 [myid:] - INFO [main:QuorumPeer@548] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 06:02:29,562 [myid:] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:11232 2016-07-21 06:02:29,563 [myid:] - INFO [main:QuorumPeer@533] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 06:02:29,564 [myid:] - INFO [main:QuorumPeer@548] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 06:02:29,566 [myid:] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:11234 2016-07-21 06:02:29,567 [myid:] - INFO [main:QuorumPeer@533] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 06:02:29,568 [myid:] - INFO [main:QuorumPeer@548] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 06:02:29,571 [myid:] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:11236 2016-07-21 06:02:29,572 [myid:] - INFO [main:QuorumPeer@533] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 06:02:29,573 [myid:] - INFO [main:QuorumPeer@548] - acceptedEpoch not found! Creating with a reasonable default of 0. 
This should only happen when you are upgrading your installation 2016-07-21 06:02:29,575 [myid:] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:11238 2016-07-21 06:02:29,576 [myid:] - INFO [main:QuorumPeer@533] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 06:02:29,577 [myid:] - INFO [main:QuorumPeer@548] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 06:02:29,580 [myid:] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:11240 2016-07-21 06:02:29,581 [myid:] - INFO [main:QuorumPeer@533] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 06:02:29,582 [myid:] - INFO [main:QuorumPeer@548] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 06:02:29,585 [myid:] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:11242 2016-07-21 06:02:29,586 [myid:] - INFO [main:QuorumPeer@533] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 06:02:29,587 [myid:] - INFO [main:QuorumPeer@548] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 06:02:29,589 [myid:] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:11244 2016-07-21 06:02:29,590 [myid:] - INFO [main:QuorumPeer@533] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 06:02:29,592 [myid:] - INFO [main:QuorumPeer@548] - acceptedEpoch not found! Creating with a reasonable default of 0. 
This should only happen when you are upgrading your installation 2016-07-21 06:02:29,594 [myid:] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:11246 2016-07-21 06:02:29,595 [myid:] - INFO [Thread-0:LeaderElection@187] - Server address: /127.0.0.1:11221 2016-07-21 06:02:29,597 [myid:] - WARN [Thread-11:MBeanRegistry@104] - Failed to register MBean LeaderElection 2016-07-21 06:02:29,597 [myid:] - WARN [Thread-9:MBeanRegistry@104] - Failed to register MBean LeaderElection 2016-07-21 06:02:29,595 [myid:] - WARN [Thread-12:MBeanRegistry@104] - Failed to register MBean LeaderElection 2016-07-21 06:02:29,596 [myid:] - WARN [Thread-10:MBeanRegistry@104] - Failed to register MBean LeaderElection 2016-07-21 06:02:29,597 [myid:] - INFO [main:QuorumPeer@533] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 06:02:29,598 [myid:] - WARN [Thread-10:LeaderElection@152] - Failed to register with JMX javax.management.InstanceAlreadyExistsException: org.apache.ZooKeeperService:name0=LeaderElection at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:453) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.internal_addObject(DefaultMBeanServerInterceptor.java:1484) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:963) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:917) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:312) at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:483) at org.apache.zookeeper.jmx.MBeanRegistry.register(MBeanRegistry.java:100) at org.apache.zookeeper.server.quorum.LeaderElection.lookForLeader(LeaderElection.java:149) at org.apache.zookeeper.test.LETest$LEThread.run(LETest.java:57) 2016-07-21 06:02:29,600 [myid:] - WARN 
[Thread-3:MBeanRegistry@104] - Failed to register MBean LeaderElection 2016-07-21 06:02:29,600 [myid:] - INFO [Thread-0:LeaderElection@187] - Server address: /127.0.0.1:11223 2016-07-21 06:02:29,600 [myid:] - WARN [Thread-5:MBeanRegistry@104] - Failed to register MBean LeaderElection 2016-07-21 06:02:29,601 [myid:] - INFO [Thread-0:LeaderElection@187] - Server address: /127.0.0.1:11225 2016-07-21 06:02:29,601 [myid:] - WARN [Thread-1:MBeanRegistry@104] - Failed to register MBean LeaderElection 2016-07-21 06:02:29,601 [myid:] - WARN [Thread-1:LeaderElection@152] - Failed to register with JMX javax.management.InstanceAlreadyExistsException: org.apache.ZooKeeperService:name0=LeaderElection at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:453) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.internal_addObject(DefaultMBeanServerInterceptor.java:1484) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:963) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:917) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:312) at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:483) at org.apache.zookeeper.jmx.MBeanRegistry.register(MBeanRegistry.java:100) at org.apache.zookeeper.server.quorum.LeaderElection.lookForLeader(LeaderElection.java:149) at org.apache.zookeeper.test.LETest$LEThread.run(LETest.java:57) 2016-07-21 06:02:29,600 [myid:] - WARN [Thread-4:MBeanRegistry@104] - Failed to register MBean LeaderElection 2016-07-21 06:02:29,602 [myid:] - WARN [Thread-4:LeaderElection@152] - Failed to register with JMX javax.management.InstanceAlreadyExistsException: org.apache.ZooKeeperService:name0=LeaderElection at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:453) at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.internal_addObject(DefaultMBeanServerInterceptor.java:1484)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:963)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:917)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:312)
	at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:483)
	at org.apache.zookeeper.jmx.MBeanRegistry.register(MBeanRegistry.java:100)
	at org.apache.zookeeper.server.quorum.LeaderElection.lookForLeader(LeaderElection.java:149)
	at org.apache.zookeeper.test.LETest$LEThread.run(LETest.java:57)
2016-07-21 06:02:29,599 [myid:] - WARN [Thread-7:MBeanRegistry@104] - Failed to register MBean LeaderElection
2016-07-21 06:02:29,599 [myid:] - INFO [main:QuorumPeer@548] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2016-07-21 06:02:29,599 [myid:] - WARN [Thread-6:MBeanRegistry@104] - Failed to register MBean LeaderElection
2016-07-21 06:02:29,604 [myid:] - WARN [Thread-6:LeaderElection@152] - Failed to register with JMX
javax.management.InstanceAlreadyExistsException: org.apache.ZooKeeperService:name0=LeaderElection
	at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:453)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.internal_addObject(DefaultMBeanServerInterceptor.java:1484)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:963)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:917)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:312)
	at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:483)
	at org.apache.zookeeper.jmx.MBeanRegistry.register(MBeanRegistry.java:100)
	at org.apache.zookeeper.server.quorum.LeaderElection.lookForLeader(LeaderElection.java:149)
	at org.apache.zookeeper.test.LETest$LEThread.run(LETest.java:57)
2016-07-21 06:02:29,599 [myid:] - WARN [Thread-8:MBeanRegistry@104] - Failed to register MBean LeaderElection
2016-07-21 06:02:29,604 [myid:] - INFO [Thread-4:LeaderElection@187] - Server address: /127.0.0.1:11221
2016-07-21 06:02:29,598 [myid:] - WARN [Thread-12:LeaderElection@152] - Failed to register with JMX
javax.management.InstanceAlreadyExistsException: org.apache.ZooKeeperService:name0=LeaderElection
	at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:453)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.internal_addObject(DefaultMBeanServerInterceptor.java:1484)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:963)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:917)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:312)
	at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:483)
	at org.apache.zookeeper.jmx.MBeanRegistry.register(MBeanRegistry.java:100)
	at org.apache.zookeeper.server.quorum.LeaderElection.lookForLeader(LeaderElection.java:149)
	at org.apache.zookeeper.test.LETest$LEThread.run(LETest.java:57)
2016-07-21 06:02:29,598 [myid:] - WARN [Thread-11:LeaderElection@152] - Failed to register with JMX
javax.management.InstanceAlreadyExistsException: org.apache.ZooKeeperService:name0=LeaderElection
	at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:453)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.internal_addObject(DefaultMBeanServerInterceptor.java:1484)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:963)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:917)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:312)
	at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:483)
	at org.apache.zookeeper.jmx.MBeanRegistry.register(MBeanRegistry.java:100)
	at org.apache.zookeeper.server.quorum.LeaderElection.lookForLeader(LeaderElection.java:149)
	at org.apache.zookeeper.test.LETest$LEThread.run(LETest.java:57)
2016-07-21 06:02:29,598 [myid:] - WARN [Thread-9:LeaderElection@152] - Failed to register with JMX
javax.management.InstanceAlreadyExistsException: org.apache.ZooKeeperService:name0=LeaderElection
	at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:453)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.internal_addObject(DefaultMBeanServerInterceptor.java:1484)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:963)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:917)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:312)
	at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:483)
	at org.apache.zookeeper.jmx.MBeanRegistry.register(MBeanRegistry.java:100)
	at org.apache.zookeeper.server.quorum.LeaderElection.lookForLeader(LeaderElection.java:149)
	at org.apache.zookeeper.test.LETest$LEThread.run(LETest.java:57)
2016-07-21 06:02:29,606 [myid:] - INFO [Thread-11:LeaderElection@187] - Server address: /127.0.0.1:11221
2016-07-21 06:02:29,606 [myid:] - INFO [Thread-9:LeaderElection@187] - Server address: /127.0.0.1:11221
2016-07-21 06:02:29,606 [myid:] - INFO [Thread-6:LeaderElection@187] - Server address: /127.0.0.1:11221
2016-07-21 06:02:29,606 [myid:] - INFO [Thread-9:LeaderElection@187] - Server address: /127.0.0.1:11223
2016-07-21 06:02:29,607 [myid:] - WARN [Thread-13:MBeanRegistry@104] - Failed to register MBean LeaderElection
2016-07-21 06:02:29,607 [myid:] - INFO [Thread-9:LeaderElection@187] - Server address: /127.0.0.1:11225
2016-07-21 06:02:29,605 [myid:] - INFO [Thread-12:LeaderElection@187] - Server address: /127.0.0.1:11221
2016-07-21 06:02:29,607 [myid:] - INFO [Thread-9:LeaderElection@187] - Server address: /127.0.0.1:11227
2016-07-21 06:02:29,605 [myid:] - INFO [Thread-4:LeaderElection@187] - Server address: /127.0.0.1:11223
2016-07-21 06:02:29,604 [myid:] - WARN [Thread-8:LeaderElection@152] - Failed to register with JMX
javax.management.InstanceAlreadyExistsException: org.apache.ZooKeeperService:name0=LeaderElection
	at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:453)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.internal_addObject(DefaultMBeanServerInterceptor.java:1484)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:963)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:917)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:312)
	at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:483)
	at org.apache.zookeeper.jmx.MBeanRegistry.register(MBeanRegistry.java:100)
	at org.apache.zookeeper.server.quorum.LeaderElection.lookForLeader(LeaderElection.java:149)
	at org.apache.zookeeper.test.LETest$LEThread.run(LETest.java:57)
2016-07-21 06:02:29,603 [myid:] - WARN [Thread-7:LeaderElection@152] - Failed to register with JMX
javax.management.InstanceAlreadyExistsException: org.apache.ZooKeeperService:name0=LeaderElection
	at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:453)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.internal_addObject(DefaultMBeanServerInterceptor.java:1484)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:963)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:917)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:312)
	at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:483)
	at org.apache.zookeeper.jmx.MBeanRegistry.register(MBeanRegistry.java:100)
	at org.apache.zookeeper.server.quorum.LeaderElection.lookForLeader(LeaderElection.java:149)
	at org.apache.zookeeper.test.LETest$LEThread.run(LETest.java:57)
2016-07-21 06:02:29,603 [myid:] - INFO [Thread-1:LeaderElection@187] - Server address: /127.0.0.1:11221
2016-07-21 06:02:29,601 [myid:] - INFO [Thread-0:LeaderElection@187] - Server address: /127.0.0.1:11227
2016-07-21 06:02:29,601 [myid:] - WARN [Thread-5:LeaderElection@152] - Failed to register with JMX
javax.management.InstanceAlreadyExistsException: org.apache.ZooKeeperService:name0=LeaderElection
	at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:453)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.internal_addObject(DefaultMBeanServerInterceptor.java:1484)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:963)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:917)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:312)
	at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:483)
	at org.apache.zookeeper.jmx.MBeanRegistry.register(MBeanRegistry.java:100)
	at org.apache.zookeeper.server.quorum.LeaderElection.lookForLeader(LeaderElection.java:149)
	at org.apache.zookeeper.test.LETest$LEThread.run(LETest.java:57)
2016-07-21 06:02:29,601 [myid:] - INFO [Thread-10:LeaderElection@187] - Server address: /127.0.0.1:11221
2016-07-21 06:02:29,601 [myid:] - WARN [Thread-3:LeaderElection@152] - Failed to register with JMX
javax.management.InstanceAlreadyExistsException: org.apache.ZooKeeperService:name0=LeaderElection
	at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:453)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.internal_addObject(DefaultMBeanServerInterceptor.java:1484)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:963)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:917)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:312)
	at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:483)
	at org.apache.zookeeper.jmx.MBeanRegistry.register(MBeanRegistry.java:100)
	at org.apache.zookeeper.server.quorum.LeaderElection.lookForLeader(LeaderElection.java:149)
	at org.apache.zookeeper.test.LETest$LEThread.run(LETest.java:57)
2016-07-21 06:02:29,610 [myid:] - INFO [Thread-10:LeaderElection@187] - Server address: /127.0.0.1:11223
2016-07-21 06:02:29,610 [myid:] - INFO [Thread-5:LeaderElection@187] - Server address: /127.0.0.1:11221
2016-07-21 06:02:29,611 [myid:] - INFO [Thread-10:LeaderElection@187] - Server address: /127.0.0.1:11225
2016-07-21 06:02:29,611 [myid:] - INFO [Thread-5:LeaderElection@187] - Server address: /127.0.0.1:11223
2016-07-21 06:02:29,609 [myid:] - INFO [Thread-0:LeaderElection@187] - Server address: /127.0.0.1:11229
2016-07-21 06:02:29,609 [myid:] - INFO [Thread-1:LeaderElection@187] - Server address: /127.0.0.1:11223
2016-07-21 06:02:29,609 [myid:] - INFO [Thread-7:LeaderElection@187] - Server address: /127.0.0.1:11221
2016-07-21 06:02:29,609 [myid:] - INFO [Thread-8:LeaderElection@187] - Server address: /127.0.0.1:11221
2016-07-21 06:02:29,608 [myid:] - INFO [Thread-4:LeaderElection@187] - Server address: /127.0.0.1:11225
2016-07-21 06:02:29,608 [myid:] - INFO [Thread-9:LeaderElection@187] - Server address: /127.0.0.1:11229
2016-07-21 06:02:29,613 [myid:] - INFO [Thread-4:LeaderElection@187] - Server address: /127.0.0.1:11227
2016-07-21 06:02:29,613 [myid:] - INFO [Thread-9:LeaderElection@187] - Server address: /127.0.0.1:11231
2016-07-21 06:02:29,607 [myid:] - INFO [Thread-12:LeaderElection@187] - Server address: /127.0.0.1:11223
2016-07-21 06:02:29,607 [myid:] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:11248
2016-07-21 06:02:29,607 [myid:] - WARN [Thread-13:LeaderElection@152] - Failed to register with JMX
javax.management.InstanceAlreadyExistsException: org.apache.ZooKeeperService:name0=LeaderElection
	at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:453)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.internal_addObject(DefaultMBeanServerInterceptor.java:1484)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:963)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:917)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:312)
	at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:483)
	at org.apache.zookeeper.jmx.MBeanRegistry.register(MBeanRegistry.java:100)
	at org.apache.zookeeper.server.quorum.LeaderElection.lookForLeader(LeaderElection.java:149)
	at org.apache.zookeeper.test.LETest$LEThread.run(LETest.java:57)
2016-07-21 06:02:29,607 [myid:] - INFO [Thread-6:LeaderElection@187] - Server address: /127.0.0.1:11223
2016-07-21 06:02:29,606 [myid:] - INFO [Thread-11:LeaderElection@187] - Server address: /127.0.0.1:11223
2016-07-21 06:02:29,615 [myid:] - INFO [Thread-13:LeaderElection@187] - Server address: /127.0.0.1:11221
2016-07-21 06:02:29,614 [myid:] - INFO [Thread-12:LeaderElection@187] - Server address: /127.0.0.1:11225
2016-07-21 06:02:29,613 [myid:] - INFO [Thread-9:LeaderElection@187] - Server address: /127.0.0.1:11233
2016-07-21 06:02:29,615 [myid:] - INFO [Thread-12:LeaderElection@187] - Server address: /127.0.0.1:11227
2016-07-21 06:02:29,613 [myid:] - INFO [Thread-4:LeaderElection@187] - Server address: /127.0.0.1:11229
2016-07-21 06:02:29,613 [myid:] - INFO [Thread-8:LeaderElection@187] - Server address: /127.0.0.1:11223
2016-07-21 06:02:29,612 [myid:] - INFO [Thread-7:LeaderElection@187] - Server address: /127.0.0.1:11223
2016-07-21 06:02:29,612 [myid:] - INFO [Thread-1:LeaderElection@187] - Server address: /127.0.0.1:11225
2016-07-21 06:02:29,612 [myid:] - INFO [Thread-0:LeaderElection@187] - Server address: /127.0.0.1:11231
2016-07-21 06:02:29,612 [myid:] - INFO [Thread-5:LeaderElection@187] - Server address: /127.0.0.1:11225
2016-07-21 06:02:29,617 [myid:] - INFO [Thread-0:LeaderElection@187] - Server address: /127.0.0.1:11233
2016-07-21 06:02:29,617 [myid:] - INFO [Thread-5:LeaderElection@187] - Server address: /127.0.0.1:11227
2016-07-21 06:02:29,612 [myid:] - INFO [Thread-10:LeaderElection@187] - Server address: /127.0.0.1:11227
2016-07-21 06:02:29,617 [myid:] - INFO [Thread-5:LeaderElection@187] - Server address: /127.0.0.1:11229
2016-07-21 06:02:29,612 [myid:] - INFO [Thread-3:LeaderElection@187] - Server address: /127.0.0.1:11221
2016-07-21 06:02:29,618 [myid:] - INFO [Thread-5:LeaderElection@187] - Server address: /127.0.0.1:11231
2016-07-21 06:02:29,618 [myid:] - INFO [Thread-10:LeaderElection@187] - Server address: /127.0.0.1:11229
2016-07-21 06:02:29,618 [myid:] - INFO [Thread-5:LeaderElection@187] - Server address: /127.0.0.1:11233
2016-07-21 06:02:29,617 [myid:] - INFO [Thread-0:LeaderElection@187] - Server address: /127.0.0.1:11235
2016-07-21 06:02:29,618 [myid:] - INFO [Thread-5:LeaderElection@187] - Server address: /127.0.0.1:11235
2016-07-21 06:02:29,617 [myid:] - INFO [Thread-1:LeaderElection@187] - Server address: /127.0.0.1:11227
2016-07-21 06:02:29,619 [myid:] - INFO [Thread-5:LeaderElection@187] - Server address: /127.0.0.1:11237
2016-07-21 06:02:29,616 [myid:] - INFO [Thread-7:LeaderElection@187] - Server address: /127.0.0.1:11225
2016-07-21 06:02:29,619 [myid:] - INFO [Thread-1:LeaderElection@187] - Server address: /127.0.0.1:11229
2016-07-21 06:02:29,616 [myid:] - INFO [Thread-8:LeaderElection@187] - Server address: /127.0.0.1:11225
2016-07-21 06:02:29,616 [myid:] - INFO [Thread-4:LeaderElection@187] - Server address: /127.0.0.1:11231
2016-07-21 06:02:29,616 [myid:] - INFO [Thread-12:LeaderElection@187] - Server address: /127.0.0.1:11229
2016-07-21 06:02:29,619 [myid:] - INFO [Thread-4:LeaderElection@187] - Server address: /127.0.0.1:11233
2016-07-21 06:02:29,620 [myid:] - INFO [Thread-12:LeaderElection@187] - Server address: /127.0.0.1:11231
2016-07-21 06:02:29,620 [myid:] - INFO [Thread-4:LeaderElection@187] - Server address: /127.0.0.1:11235
2016-07-21 06:02:29,616 [myid:] - INFO [Thread-9:LeaderElection@187] - Server address: /127.0.0.1:11235
2016-07-21 06:02:29,615 [myid:] - INFO [Thread-13:LeaderElection@187] - Server address: /127.0.0.1:11223
2016-07-21 06:02:29,620 [myid:] - INFO [Thread-9:LeaderElection@187] - Server address: /127.0.0.1:11237
2016-07-21 06:02:29,615 [myid:] - INFO [Thread-11:LeaderElection@187] - Server address: /127.0.0.1:11225
2016-07-21 06:02:29,615 [myid:] - INFO [Thread-6:LeaderElection@187] - Server address: /127.0.0.1:11225
2016-07-21 06:02:29,615 [myid:] - INFO [main:QuorumPeer@533] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2016-07-21 06:02:29,621 [myid:] - INFO [Thread-6:LeaderElection@187] - Server address: /127.0.0.1:11227
2016-07-21 06:02:29,621 [myid:] - INFO [Thread-11:LeaderElection@187] - Server address: /127.0.0.1:11227
2016-07-21 06:02:29,622 [myid:] - INFO [Thread-6:LeaderElection@187] - Server address: /127.0.0.1:11229
2016-07-21 06:02:29,621 [myid:] - INFO [Thread-9:LeaderElection@187] - Server address: /127.0.0.1:11239
2016-07-21 06:02:29,622 [myid:] - INFO [Thread-6:LeaderElection@187] - Server address: /127.0.0.1:11231
2016-07-21 06:02:29,621 [myid:] - INFO [Thread-13:LeaderElection@187] - Server address: /127.0.0.1:11225
2016-07-21 06:02:29,622 [myid:] - INFO [Thread-6:LeaderElection@187] - Server address: /127.0.0.1:11233
2016-07-21 06:02:29,620 [myid:] - INFO [Thread-4:LeaderElection@187] - Server address: /127.0.0.1:11237
2016-07-21 06:02:29,620 [myid:] - INFO [Thread-12:LeaderElection@187] - Server address: /127.0.0.1:11233
2016-07-21 06:02:29,623 [myid:] - INFO [Thread-4:LeaderElection@187] - Server address: /127.0.0.1:11239
2016-07-21 06:02:29,619 [myid:] - INFO [Thread-8:LeaderElection@187] - Server address: /127.0.0.1:11227
2016-07-21 06:02:29,623 [myid:] - INFO [Thread-4:LeaderElection@187] - Server address: /127.0.0.1:11241
2016-07-21 06:02:29,619 [myid:] - INFO [Thread-1:LeaderElection@187] - Server address: /127.0.0.1:11231
2016-07-21 06:02:29,623 [myid:] - INFO [Thread-8:LeaderElection@187] - Server address: /127.0.0.1:11229
2016-07-21 06:02:29,624 [myid:] - INFO [Thread-1:LeaderElection@187] - Server address: /127.0.0.1:11233
2016-07-21 06:02:29,624 [myid:] - INFO [Thread-8:LeaderElection@187] - Server address: /127.0.0.1:11231
2016-07-21 06:02:29,624 [myid:] - INFO [Thread-1:LeaderElection@187] - Server address: /127.0.0.1:11235
2016-07-21 06:02:29,624 [myid:] - INFO [Thread-8:LeaderElection@187] - Server address: /127.0.0.1:11233
2016-07-21 06:02:29,624 [myid:] - INFO [Thread-1:LeaderElection@187] - Server address: /127.0.0.1:11237
2016-07-21 06:02:29,619 [myid:] - INFO [Thread-7:LeaderElection@187] - Server address: /127.0.0.1:11227
2016-07-21 06:02:29,619 [myid:] - INFO [Thread-5:LeaderElection@187] - Server address: /127.0.0.1:11239
2016-07-21 06:02:29,618 [myid:] - INFO [Thread-0:LeaderElection@187] - Server address: /127.0.0.1:11237
2016-07-21 06:02:29,625 [myid:] - INFO [Thread-5:LeaderElection@187] - Server address: /127.0.0.1:11241
2016-07-21 06:02:29,618 [myid:] - INFO [Thread-10:LeaderElection@187] - Server address: /127.0.0.1:11231
2016-07-21 06:02:29,625 [myid:] - INFO [Thread-5:LeaderElection@187] - Server address: /127.0.0.1:11243
2016-07-21 06:02:29,618 [myid:] - INFO [Thread-3:LeaderElection@187] - Server address: /127.0.0.1:11223
2016-07-21 06:02:29,627 [myid:] - INFO [Thread-5:LeaderElection@187] - Server address: /127.0.0.1:11245
2016-07-21 06:02:29,627 [myid:] - INFO [Thread-3:LeaderElection@187] - Server address: /127.0.0.1:11225
2016-07-21 06:02:29,627 [myid:] - INFO [Thread-5:LeaderElection@187] - Server address: /127.0.0.1:11247
2016-07-21 06:02:29,627 [myid:] - INFO [Thread-3:LeaderElection@187] - Server address: /127.0.0.1:11227
2016-07-21 06:02:29,626 [myid:] - INFO [Thread-10:LeaderElection@187] - Server address: /127.0.0.1:11233
2016-07-21 06:02:29,625 [myid:] - INFO [Thread-0:LeaderElection@187] - Server address: /127.0.0.1:11239
2016-07-21 06:02:29,625 [myid:] - INFO [Thread-7:LeaderElection@187] - Server address: /127.0.0.1:11229
2016-07-21 06:02:29,625 [myid:] - INFO [Thread-1:LeaderElection@187] - Server address: /127.0.0.1:11239
2016-07-21 06:02:29,624 [myid:] - INFO [Thread-8:LeaderElection@187] - Server address: /127.0.0.1:11235
2016-07-21 06:02:29,628 [myid:] - INFO [Thread-1:LeaderElection@187] - Server address: /127.0.0.1:11241
2016-07-21 06:02:29,624 [myid:] - INFO [Thread-4:LeaderElection@187] - Server address: /127.0.0.1:11243
2016-07-21 06:02:29,623 [myid:] - INFO [Thread-12:LeaderElection@187] - Server address: /127.0.0.1:11235
2016-07-21 06:02:29,629 [myid:] - INFO [Thread-4:LeaderElection@187] - Server address: /127.0.0.1:11245
2016-07-21 06:02:29,629 [myid:] - INFO [Thread-12:LeaderElection@187] - Server address: /127.0.0.1:11237
2016-07-21 06:02:29,629 [myid:] - INFO [Thread-4:LeaderElection@187] - Server address: /127.0.0.1:11247
2016-07-21 06:02:29,623 [myid:] - INFO [Thread-6:LeaderElection@187] - Server address: /127.0.0.1:11235
2016-07-21 06:02:29,630 [myid:] - INFO [Thread-6:LeaderElection@187] - Server address: /127.0.0.1:11237
2016-07-21 06:02:29,630 [myid:] - INFO [Thread-6:LeaderElection@187] - Server address: /127.0.0.1:11239
2016-07-21 06:02:29,623 [myid:] - INFO [Thread-13:LeaderElection@187] - Server address: /127.0.0.1:11227
2016-07-21 06:02:29,630 [myid:] - INFO [Thread-6:LeaderElection@187] - Server address: /127.0.0.1:11241
2016-07-21 06:02:29,630 [myid:] - INFO [Thread-13:LeaderElection@187] - Server address: /127.0.0.1:11229
2016-07-21 06:02:29,622 [myid:] - INFO [main:QuorumPeer@548] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2016-07-21 06:02:29,622 [myid:] - INFO [Thread-9:LeaderElection@187] - Server address: /127.0.0.1:11241
2016-07-21 06:02:29,622 [myid:] - INFO [Thread-11:LeaderElection@187] - Server address: /127.0.0.1:11229
2016-07-21 06:02:29,630 [myid:] - INFO [Thread-13:LeaderElection@187] - Server address: /127.0.0.1:11231
2016-07-21 06:02:29,629 [myid:] - INFO [Thread-12:LeaderElection@187] - Server address: /127.0.0.1:11239
2016-07-21 06:02:29,631 [myid:] - INFO [Thread-6:LeaderElection@187] - Server address: /127.0.0.1:11243
2016-07-21 06:02:29,629 [myid:] - INFO [Thread-1:LeaderElection@187] - Server address: /127.0.0.1:11243
2016-07-21 06:02:29,632 [myid:] - INFO [Thread-6:LeaderElection@187] - Server address: /127.0.0.1:11245
2016-07-21 06:02:29,628 [myid:] - INFO [Thread-8:LeaderElection@187] - Server address: /127.0.0.1:11237
2016-07-21 06:02:29,632 [myid:] - INFO [Thread-6:LeaderElection@187] - Server address: /127.0.0.1:11247
2016-07-21 06:02:29,632 [myid:] - INFO [Thread-8:LeaderElection@187] - Server address: /127.0.0.1:11239
2016-07-21 06:02:29,628 [myid:] - INFO [Thread-7:LeaderElection@187] - Server address: /127.0.0.1:11231
2016-07-21 06:02:29,628 [myid:] - INFO [Thread-0:LeaderElection@187] - Server address: /127.0.0.1:11241
2016-07-21 06:02:29,628 [myid:] - INFO [Thread-10:LeaderElection@187] - Server address: /127.0.0.1:11235
2016-07-21 06:02:29,633 [myid:] - INFO [Thread-0:LeaderElection@187] - Server address: /127.0.0.1:11243
2016-07-21 06:02:29,628 [myid:] - INFO [Thread-3:LeaderElection@187] - Server address: /127.0.0.1:11229
2016-07-21 06:02:29,633 [myid:] - INFO [Thread-0:LeaderElection@187] - Server address: /127.0.0.1:11245
2016-07-21 06:02:29,633 [myid:] - INFO [Thread-3:LeaderElection@187] - Server address: /127.0.0.1:11231
2016-07-21 06:02:29,633 [myid:] - INFO [Thread-10:LeaderElection@187] - Server address: /127.0.0.1:11237
2016-07-21 06:02:29,633 [myid:] - INFO [Thread-6:LeaderElection@187] - Server address: /127.0.0.1:11249
2016-07-21 06:02:29,633 [myid:] - INFO [Thread-7:LeaderElection@187] - Server address: /127.0.0.1:11233
2016-07-21 06:02:29,633 [myid:] - INFO [Thread-8:LeaderElection@187] - Server address: /127.0.0.1:11241
2016-07-21 06:02:29,632 [myid:] - INFO [Thread-1:LeaderElection@187] - Server address: /127.0.0.1:11245
2016-07-21 06:02:29,631 [myid:] - INFO [Thread-12:LeaderElection@187] - Server address: /127.0.0.1:11241
2016-07-21 06:02:29,634 [myid:] - INFO [Thread-1:LeaderElection@187] - Server address: /127.0.0.1:11247
2016-07-21 06:02:29,631 [myid:] - INFO [Thread-9:LeaderElection@187] - Server address: /127.0.0.1:11243
2016-07-21 06:02:29,635 [myid:] - INFO [Thread-12:LeaderElection@187] - Server address: /127.0.0.1:11243
2016-07-21 06:02:29,635 [myid:] - INFO [Thread-9:LeaderElection@187] - Server address: /127.0.0.1:11245
2016-07-21 06:02:29,631 [myid:] - INFO [Thread-13:LeaderElection@187] - Server address: /127.0.0.1:11233
2016-07-21 06:02:29,635 [myid:] - INFO [Thread-9:LeaderElection@187] - Server address: /127.0.0.1:11247
2016-07-21 06:02:29,631 [myid:] - INFO [Thread-11:LeaderElection@187] - Server address: /127.0.0.1:11231
2016-07-21 06:02:29,635 [myid:] - INFO [Thread-9:LeaderElection@187] - Server address: /127.0.0.1:11249
2016-07-21 06:02:29,635 [myid:] - INFO [Thread-13:LeaderElection@187] - Server address: /127.0.0.1:11235
2016-07-21 06:02:29,635 [myid:] - INFO [Thread-12:LeaderElection@187] - Server address: /127.0.0.1:11245
2016-07-21 06:02:29,635 [myid:] - INFO [Thread-1:LeaderElection@187] - Server address: /127.0.0.1:11249
2016-07-21 06:02:29,634 [myid:] - INFO [Thread-8:LeaderElection@187] - Server address: /127.0.0.1:11243
2016-07-21 06:02:29,634 [myid:] - INFO [Thread-7:LeaderElection@187] - Server address: /127.0.0.1:11235
2016-07-21 06:02:29,634 [myid:] - INFO [Thread-0:LeaderElection@187] - Server address: /127.0.0.1:11247
2016-07-21 06:02:29,634 [myid:] - INFO [Thread-10:LeaderElection@187] - Server address: /127.0.0.1:11239
2016-07-21 06:02:29,634 [myid:] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:11250
2016-07-21 06:02:29,634 [myid:] - INFO [Thread-3:LeaderElection@187] - Server address: /127.0.0.1:11233
2016-07-21 06:02:29,633 [myid:] - WARN [Thread-14:MBeanRegistry@104] - Failed to register MBean LeaderElection
2016-07-21 06:02:29,637 [myid:] - INFO [Thread-3:LeaderElection@187] - Server address: /127.0.0.1:11235
2016-07-21 06:02:29,637 [myid:] - INFO [Thread-10:LeaderElection@187] - Server address: /127.0.0.1:11241
2016-07-21 06:02:29,637 [myid:] - INFO [Thread-0:LeaderElection@187] - Server address: /127.0.0.1:11249
2016-07-21 06:02:29,636 [myid:] - INFO [Thread-7:LeaderElection@187] - Server address: /127.0.0.1:11237
2016-07-21 06:02:29,636 [myid:] - INFO [Thread-8:LeaderElection@187] - Server address: /127.0.0.1:11245
2016-07-21 06:02:29,636 [myid:] - INFO [Thread-12:LeaderElection@187] - Server address: /127.0.0.1:11247
2016-07-21 06:02:29,638 [myid:] - INFO [Thread-8:LeaderElection@187] - Server address: /127.0.0.1:11247
2016-07-21 06:02:29,636 [myid:] - INFO [Thread-13:LeaderElection@187] - Server address: /127.0.0.1:11237
2016-07-21 06:02:29,638 [myid:] - INFO [Thread-8:LeaderElection@187] - Server address: /127.0.0.1:11249
2016-07-21 06:02:29,638 [myid:] - INFO [Thread-13:LeaderElection@187] - Server address: /127.0.0.1:11239
2016-07-21 06:02:29,636 [myid:] - INFO [Thread-11:LeaderElection@187] - Server address: /127.0.0.1:11233
2016-07-21 06:02:29,638 [myid:] - INFO [Thread-12:LeaderElection@187] - Server address: /127.0.0.1:11249
2016-07-21 06:02:29,638 [myid:] - INFO [Thread-7:LeaderElection@187] - Server address: /127.0.0.1:11239
2016-07-21 06:02:29,637 [myid:] - INFO [Thread-10:LeaderElection@187] - Server address: /127.0.0.1:11243
2016-07-21 06:02:29,637 [myid:] - INFO [main:QuorumPeer@533] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2016-07-21 06:02:29,637 [myid:] - INFO [Thread-3:LeaderElection@187] - Server address: /127.0.0.1:11237
2016-07-21 06:02:29,637 [myid:] - WARN [Thread-14:LeaderElection@152] - Failed to register with JMX
javax.management.InstanceAlreadyExistsException: org.apache.ZooKeeperService:name0=LeaderElection
	at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:453)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.internal_addObject(DefaultMBeanServerInterceptor.java:1484)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:963)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:917)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:312)
	at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:483)
	at org.apache.zookeeper.jmx.MBeanRegistry.register(MBeanRegistry.java:100)
	at org.apache.zookeeper.server.quorum.LeaderElection.lookForLeader(LeaderElection.java:149)
	at org.apache.zookeeper.test.LETest$LEThread.run(LETest.java:57)
2016-07-21 06:02:29,639 [myid:] - INFO [Thread-3:LeaderElection@187] - Server address: /127.0.0.1:11239
2016-07-21 06:02:29,640 [myid:] - INFO [Thread-14:LeaderElection@187] - Server address: /127.0.0.1:11221
2016-07-21 06:02:29,640 [myid:] - INFO [Thread-3:LeaderElection@187] - Server address: /127.0.0.1:11241
2016-07-21 06:02:29,639 [myid:] - INFO [Thread-10:LeaderElection@187] - Server address: /127.0.0.1:11245
2016-07-21 06:02:29,639 [myid:] - INFO [Thread-7:LeaderElection@187] - Server address: /127.0.0.1:11241
2016-07-21 06:02:29,639 [myid:] - INFO [Thread-11:LeaderElection@187] - Server address: /127.0.0.1:11235
2016-07-21 06:02:29,639 [myid:] - INFO [Thread-13:LeaderElection@187] - Server address: /127.0.0.1:11241
2016-07-21 06:02:29,641 [myid:] - INFO [Thread-11:LeaderElection@187] - Server add ...[truncated 187743 chars]... > 1
2016-07-21 06:02:36,006 [myid:] - INFO [Thread-13:LeaderElection@187] - Server address: /127.0.0.1:11233
2016-07-21 06:02:36,006 [myid:] - INFO [Thread-18:LeaderElection@130] - 21 -> 1
2016-07-21 06:02:36,008 [myid:] - INFO [Thread-13:LeaderElection@187] - Server address: /127.0.0.1:11235
2016-07-21 06:02:36,006 [myid:] - INFO [Thread-3:LeaderElection@187] - Server address: /127.0.0.1:11223
2016-07-21 06:02:35,931 [myid:] - INFO [Thread-10:LeaderElection@187] - Server address: /127.0.0.1:11223
2016-07-21 06:02:35,930 [myid:] - INFO [Thread-9:LeaderElection@187] - Server address: /127.0.0.1:11223
2016-07-21 06:02:35,930 [myid:] - INFO [Thread-15:LeaderElection@187] - Server address: /127.0.0.1:11223
2016-07-21 06:02:35,930 [myid:] - INFO [Thread-7:LeaderElection@187] - Server address: /127.0.0.1:11235
2016-07-21 06:02:35,930 [myid:] - INFO [Thread-0:LeaderElection@187] - Server address: /127.0.0.1:11243
2016-07-21 06:02:36,008 [myid:] - INFO [Thread-7:LeaderElection@187] - Server address: /127.0.0.1:11237
2016-07-21 06:02:35,930 [myid:] - INFO [Thread-1:LeaderElection@187] - Server address: /127.0.0.1:11239
2016-07-21 06:02:35,930 [myid:] - INFO [Thread-17:LeaderElection@187] - Server address: /127.0.0.1:11235
2016-07-21 06:02:35,929 [myid:] - INFO [Thread-5:LeaderElection@187] - Server address: /127.0.0.1:11241
2016-07-21 06:02:35,929 [myid:] - INFO [Thread-4:LeaderElection@187] - Server address: /127.0.0.1:11255
2016-07-21 06:02:35,929 [myid:] - INFO [Thread-11:LeaderElection@187] - Server address: /127.0.0.1:11251
2016-07-21 06:02:36,009 [myid:] - INFO [Thread-4:LeaderElection@187] - Server address: /127.0.0.1:11253
2016-07-21 06:02:35,927 [myid:] - INFO [Thread-6:LeaderElection@187] - Server address: /127.0.0.1:11239
2016-07-21 06:02:35,927 [myid:] - INFO [Thread-27:LeaderElection@187] - Server address: /127.0.0.1:11249
2016-07-21 06:02:36,010 [myid:] - INFO [Thread-6:LeaderElection@187] - Server address: /127.0.0.1:11241
2016-07-21 06:02:35,927 [myid:] - INFO [Thread-8:LeaderElection@187] - Server address: /127.0.0.1:11235
2016-07-21 06:02:35,926 [myid:] - INFO [Thread-28:LeaderElection@187] - Server address: /127.0.0.1:11237
2016-07-21 06:02:35,926 [myid:] - INFO [Thread-30:LeaderElection@187] - Server address: /127.0.0.1:11239
2016-07-21 06:02:35,926 [myid:] - INFO [Thread-29:LeaderElection@187] - Server address: /127.0.0.1:11241
2016-07-21 06:02:35,926 [myid:] - INFO [Thread-26:LeaderElection@130] - 2 -> 1
2016-07-21 06:02:35,925 [myid:] - INFO [Thread-16:LeaderElection@130] - 24 -> 1
2016-07-21 06:02:36,011 [myid:] - INFO [Thread-16:LeaderElection@130] - 16 -> 1
2016-07-21 06:02:36,011 [myid:] - INFO [Thread-16:LeaderElection@130] - 23 -> 1
2016-07-21 06:02:35,925 [myid:] - INFO [Thread-20:LeaderElection@130] - 13 -> 1
2016-07-21 06:02:36,012 [myid:] - INFO [Thread-20:LeaderElection@130] - 24 -> 1
2016-07-21 06:02:36,012 [myid:] - INFO [Thread-20:LeaderElection@130] - 16 -> 1
2016-07-21 06:02:35,924 [myid:] - INFO [Thread-25:LeaderElection@130] - 9 -> 1
2016-07-21 06:02:35,924 [myid:] - INFO [Thread-21:LeaderElection@130] - 9 -> 1
2016-07-21 06:02:36,012 [myid:] - INFO [Thread-21:LeaderElection@130] - 10 -> 1
2016-07-21 06:02:36,012 [myid:] - INFO [Thread-21:LeaderElection@130] - 0 -> 1
2016-07-21 06:02:36,012 [myid:] - INFO [Thread-21:LeaderElection@130] - 4 -> 1
2016-07-21 06:02:35,924 [myid:] - INFO [Thread-23:LeaderElection@130] - 13 -> 1
2016-07-21 06:02:36,012 [myid:] - INFO [Thread-23:LeaderElection@130] - 21 -> 1
2016-07-21 06:02:36,013 [myid:] - INFO [Thread-23:LeaderElection@130] - 24 -> 1
2016-07-21 06:02:36,013 [myid:] - INFO [Thread-23:LeaderElection@130] - 16 -> 1
2016-07-21 06:02:36,013 [myid:] - INFO [Thread-23:LeaderElection@130] - 23 -> 1
2016-07-21 06:02:35,924 [myid:] - INFO [Thread-19:LeaderElection@130] - 3 -> 1
2016-07-21 06:02:35,924 [myid:] - INFO [Thread-12:LeaderElection@130] - 20 -> 1
2016-07-21 06:02:36,013 [myid:] - INFO [Thread-19:LeaderElection@130] - 2 -> 1
2016-07-21 06:02:36,012 [myid:] - INFO [Thread-21:LeaderElection@130] - 3 -> 1
2016-07-21 06:02:36,012 [myid:] - INFO [Thread-25:LeaderElection@130] - 10 -> 1
2016-07-21 06:02:36,013 [myid:] - INFO [Thread-25:LeaderElection@130] - 0 -> 1
2016-07-21 06:02:36,013 [myid:] - INFO [Thread-25:LeaderElection@130] - 4 -> 1
2016-07-21 06:02:36,013 [myid:] - INFO [Thread-25:LeaderElection@130] - 3 -> 1
2016-07-21 06:02:36,014 [myid:] - INFO [Thread-25:LeaderElection@130] - 2 -> 1
2016-07-21 06:02:36,014 [myid:] - INFO [Thread-25:LeaderElection@130] - 1 -> 1
2016-07-21 06:02:36,014 [myid:] - INFO [Thread-25:LeaderElection@130] - 7 -> 1
2016-07-21 06:02:36,014 [myid:] - INFO [Thread-25:LeaderElection@130] - 18 -> 1
2016-07-21 06:02:36,014 [myid:] - INFO [Thread-25:LeaderElection@130] - 17 -> 1
2016-07-21 06:02:36,014 [myid:] - INFO [Thread-25:LeaderElection@130] - 25 -> 1
2016-07-21 06:02:36,014 [myid:] - INFO [Thread-25:LeaderElection@130] - 20 -> 1
2016-07-21 06:02:36,011 [myid:] - INFO [Thread-29:LeaderElection@187] - Server address: /127.0.0.1:11243
2016-07-21 06:02:36,011 [myid:] - INFO [Thread-26:LeaderElection@130] - 1 -> 1
2016-07-21 06:02:36,011 [myid:] - INFO [Thread-30:LeaderElection@187] - Server address: /127.0.0.1:11241
2016-07-21 06:02:36,011 [myid:] - INFO [Thread-28:LeaderElection@187] - Server address: /127.0.0.1:11239
2016-07-21 06:02:36,011 [myid:] - INFO [Thread-8:LeaderElection@187] - Server address: /127.0.0.1:11237
2016-07-21 06:02:36,010 [myid:] - INFO [Thread-6:LeaderElection@187] - Server address: /127.0.0.1:11243
2016-07-21 06:02:36,015 [myid:] - INFO [Thread-8:LeaderElection@187] - Server address: /127.0.0.1:11239
2016-07-21 06:02:36,010 [myid:] - INFO [Thread-27:LeaderElection@187] - Server address: /127.0.0.1:11251
2016-07-21 06:02:36,010 [myid:] - INFO [Thread-4:LeaderElection@187] - Server address: /127.0.0.1:11259
2016-07-21 06:02:36,010 [myid:] - INFO [Thread-11:LeaderElection@187] - Server address: /127.0.0.1:11255
2016-07-21 06:02:36,009 [myid:] - INFO [Thread-5:LeaderElection@187] - Server address: /127.0.0.1:11243
2016-07-21 06:02:36,009 [myid:] - INFO [Thread-17:LeaderElection@187] - Server address: /127.0.0.1:11237
2016-07-21 06:02:36,009 [myid:] - INFO [Thread-1:LeaderElection@187] - Server address: /127.0.0.1:11241
2016-07-21 06:02:36,009 [myid:] - INFO [Thread-7:LeaderElection@187] - Server address: /127.0.0.1:11239
2016-07-21 06:02:36,009 [myid:] - INFO [Thread-0:LeaderElection@187] - Server address: /127.0.0.1:11245
2016-07-21 06:02:36,017 [myid:] - INFO [Thread-7:LeaderElection@187] - Server address: /127.0.0.1:11241
2016-07-21 06:02:36,008 [myid:] - INFO [Thread-15:LeaderElection@187] - Server address: /127.0.0.1:11225
2016-07-21 06:02:36,008 [myid:] - INFO [Thread-9:LeaderElection@187] - Server address: /127.0.0.1:11225
2016-07-21 06:02:36,008 [myid:] - INFO [Thread-10:LeaderElection@187] - Server address: /127.0.0.1:11225
2016-07-21 06:02:36,008 [myid:] - INFO [Thread-3:LeaderElection@187] - Server address: /127.0.0.1:11225
2016-07-21 06:02:36,008 [myid:] - INFO [Thread-13:LeaderElection@187] - Server address: /127.0.0.1:11237
2016-07-21 06:02:36,008 [myid:] - INFO [Thread-18:LeaderElection@130] - 24 -> 1
2016-07-21 06:02:36,051 [myid:] - INFO [Thread-18:LeaderElection@130] - 16 -> 1
2016-07-21 06:02:36,007 [myid:] - INFO [Thread-24:LeaderElection@130] - 21 -> 1
2016-07-21 06:02:36,051 [myid:] - INFO [Thread-24:LeaderElection@130] - 16 -> 1
2016-07-21 06:02:36,051 [myid:] - INFO [Thread-24:LeaderElection@130] - 24 -> 1
2016-07-21 06:02:36,007 [myid:] - INFO [Thread-14:LeaderElection@130] - 4 -> 1
2016-07-21 06:02:36,051 [myid:] - INFO [Thread-14:LeaderElection@130] - 3 -> 1
2016-07-21 06:02:36,051 [myid:] - INFO [Thread-14:LeaderElection@130] - 2 -> 1
2016-07-21 06:02:36,051 [myid:] - INFO [Thread-14:LeaderElection@130] - 1 -> 1
2016-07-21 06:02:36,052 [myid:] - INFO [Thread-14:LeaderElection@130]
- 7 -> 1 2016-07-21 06:02:36,052 [myid:] - INFO [Thread-14:LeaderElection@130] - 18 -> 1 2016-07-21 06:02:36,052 [myid:] - INFO [Thread-14:LeaderElection@130] - 17 -> 1 2016-07-21 06:02:36,052 [myid:] - INFO [Thread-14:LeaderElection@130] - 25 -> 1 2016-07-21 06:02:36,007 [myid:] - INFO [Thread-22:LeaderElection@130] - 6 -> 1 2016-07-21 06:02:36,052 [myid:] - INFO [Thread-22:LeaderElection@130] - 11 -> 1 2016-07-21 06:02:36,052 [myid:] - INFO [Thread-14:LeaderElection@130] - 20 -> 1 2016-07-21 06:02:36,051 [myid:] - INFO [Thread-18:LeaderElection@130] - 23 -> 1 2016-07-21 06:02:36,051 [myid:] - INFO [Thread-13:LeaderElection@187] - Server address: /127.0.0.1:11239 2016-07-21 06:02:36,051 [myid:] - INFO [Thread-3:LeaderElection@187] - Server address: /127.0.0.1:11227 2016-07-21 06:02:36,050 [myid:] - INFO [Thread-9:LeaderElection@187] - Server address: /127.0.0.1:11227 2016-07-21 06:02:36,050 [myid:] - INFO [Thread-10:LeaderElection@187] - Server address: /127.0.0.1:11227 2016-07-21 06:02:36,050 [myid:] - INFO [Thread-15:LeaderElection@187] - Server address: /127.0.0.1:11227 2016-07-21 06:02:36,017 [myid:] - INFO [Thread-7:LeaderElection@187] - Server address: /127.0.0.1:11243 2016-07-21 06:02:36,017 [myid:] - INFO [Thread-0:LeaderElection@187] - Server address: /127.0.0.1:11247 2016-07-21 06:02:36,053 [myid:] - INFO [Thread-7:LeaderElection@187] - Server address: /127.0.0.1:11245 2016-07-21 06:02:36,053 [myid:] - INFO [Thread-0:LeaderElection@187] - Server address: /127.0.0.1:11249 2016-07-21 06:02:36,053 [myid:] - INFO [Thread-7:LeaderElection@187] - Server address: /127.0.0.1:11247 2016-07-21 06:02:36,053 [myid:] - INFO [Thread-0:LeaderElection@187] - Server address: /127.0.0.1:11251 2016-07-21 06:02:36,016 [myid:] - INFO [Thread-1:LeaderElection@187] - Server address: /127.0.0.1:11243 2016-07-21 06:02:36,016 [myid:] - INFO [Thread-17:LeaderElection@187] - Server address: /127.0.0.1:11239 2016-07-21 06:02:36,016 [myid:] - INFO [Thread-5:LeaderElection@187] - 
Server address: /127.0.0.1:11245 2016-07-21 06:02:36,016 [myid:] - INFO [Thread-11:LeaderElection@187] - Server address: /127.0.0.1:11253 2016-07-21 06:02:36,016 [myid:] - INFO [Thread-27:LeaderElection@187] - Server address: /127.0.0.1:11255 2016-07-21 06:02:36,016 [myid:] - INFO [Thread-4:LeaderElection@187] - Server address: /127.0.0.1:11257 2016-07-21 06:02:36,054 [myid:] - INFO [Thread-27:LeaderElection@187] - Server address: /127.0.0.1:11253 2016-07-21 06:02:36,015 [myid:] - INFO [Thread-8:LeaderElection@187] - Server address: /127.0.0.1:11241 2016-07-21 06:02:36,054 [myid:] - INFO [Thread-27:LeaderElection@187] - Server address: /127.0.0.1:11259 2016-07-21 06:02:36,015 [myid:] - INFO [Thread-6:LeaderElection@187] - Server address: /127.0.0.1:11245 2016-07-21 06:02:36,055 [myid:] - INFO [Thread-27:LeaderElection@187] - Server address: /127.0.0.1:11257 2016-07-21 06:02:36,015 [myid:] - INFO [Thread-28:LeaderElection@187] - Server address: /127.0.0.1:11241 2016-07-21 06:02:36,055 [myid:] - INFO [Thread-27:LeaderElection@187] - Server address: /127.0.0.1:11263 2016-07-21 06:02:36,015 [myid:] - INFO [Thread-30:LeaderElection@187] - Server address: /127.0.0.1:11243 2016-07-21 06:02:36,055 [myid:] - INFO [Thread-27:LeaderElection@187] - Server address: /127.0.0.1:11261 2016-07-21 06:02:36,015 [myid:] - INFO [Thread-29:LeaderElection@187] - Server address: /127.0.0.1:11245 2016-07-21 06:02:36,055 [myid:] - INFO [Thread-27:LeaderElection@187] - Server address: /127.0.0.1:11267 2016-07-21 06:02:36,014 [myid:] - INFO [Thread-26:LeaderElection@130] - 26 -> 1 2016-07-21 06:02:36,055 [myid:] - INFO [Thread-26:LeaderElection@130] - 18 -> 1 2016-07-21 06:02:36,055 [myid:] - INFO [Thread-26:LeaderElection@130] - 17 -> 1 2016-07-21 06:02:36,055 [myid:] - INFO [Thread-26:LeaderElection@130] - 25 -> 1 2016-07-21 06:02:36,056 [myid:] - INFO [Thread-26:LeaderElection@130] - 20 -> 1 2016-07-21 06:02:36,056 [myid:] - INFO [Thread-26:LeaderElection@130] - 27 -> 1 2016-07-21 
06:02:36,056 [myid:] - INFO [Thread-26:LeaderElection@130] - 19 -> 1 2016-07-21 06:02:36,056 [myid:] - INFO [Thread-26:LeaderElection@130] - 14 -> 1 2016-07-21 06:02:36,056 [myid:] - INFO [Thread-26:LeaderElection@130] - 22 -> 1 2016-07-21 06:02:36,014 [myid:] - INFO [Thread-25:LeaderElection@130] - 19 -> 1 2016-07-21 06:02:36,013 [myid:] - INFO [Thread-21:LeaderElection@130] - 2 -> 1 2016-07-21 06:02:36,013 [myid:] - INFO [Thread-19:LeaderElection@130] - 1 -> 1 2016-07-21 06:02:36,013 [myid:] - INFO [Thread-12:LeaderElection@130] - 19 -> 1 2016-07-21 06:02:36,056 [myid:] - INFO [Thread-19:LeaderElection@130] - 7 -> 1 2016-07-21 06:02:36,056 [myid:] - INFO [Thread-21:LeaderElection@130] - 1 -> 1 2016-07-21 06:02:36,056 [myid:] - INFO [Thread-25:LeaderElection@130] - 14 -> 1 2016-07-21 06:02:36,056 [myid:] - INFO [Thread-26:LeaderElection@130] - 13 -> 1 2016-07-21 06:02:36,055 [myid:] - INFO [Thread-27:LeaderElection@187] - Server address: /127.0.0.1:11265 2016-07-21 06:02:36,055 [myid:] - INFO [Thread-29:LeaderElection@187] - Server address: /127.0.0.1:11247 2016-07-21 06:02:36,055 [myid:] - INFO [Thread-30:LeaderElection@187] - Server address: /127.0.0.1:11245 2016-07-21 06:02:36,057 [myid:] - INFO [Thread-29:LeaderElection@187] - Server address: /127.0.0.1:11249 2016-07-21 06:02:36,055 [myid:] - INFO [Thread-28:LeaderElection@187] - Server address: /127.0.0.1:11243 2016-07-21 06:02:36,057 [myid:] - INFO [Thread-29:LeaderElection@187] - Server address: /127.0.0.1:11251 2016-07-21 06:02:36,057 [myid:] - INFO [Thread-28:LeaderElection@187] - Server address: /127.0.0.1:11245 2016-07-21 06:02:36,058 [myid:] - INFO [Thread-29:LeaderElection@187] - Server address: /127.0.0.1:11255 2016-07-21 06:02:36,055 [myid:] - INFO [Thread-6:LeaderElection@187] - Server address: /127.0.0.1:11247 2016-07-21 06:02:36,055 [myid:] - INFO [Thread-8:LeaderElection@187] - Server address: /127.0.0.1:11243 2016-07-21 06:02:36,054 [myid:] - INFO [Thread-4:LeaderElection@187] - Server address: 
/127.0.0.1:11263 2016-07-21 06:02:36,054 [myid:] - INFO [Thread-11:LeaderElection@187] - Server address: /127.0.0.1:11259 2016-07-21 06:02:36,058 [myid:] - INFO [Thread-4:LeaderElection@187] - Server address: /127.0.0.1:11261 2016-07-21 06:02:36,054 [myid:] - INFO [Thread-5:LeaderElection@187] - Server address: /127.0.0.1:11247 2016-07-21 06:02:36,054 [myid:] - INFO [Thread-17:LeaderElection@187] - Server address: /127.0.0.1:11241 2016-07-21 06:02:36,054 [myid:] - INFO [Thread-1:LeaderElection@187] - Server address: /127.0.0.1:11245 2016-07-21 06:02:36,054 [myid:] - INFO [Thread-0:LeaderElection@187] - Server address: /127.0.0.1:11255 2016-07-21 06:02:36,053 [myid:] - INFO [Thread-7:LeaderElection@187] - Server address: /127.0.0.1:11249 2016-07-21 06:02:36,059 [myid:] - INFO [Thread-0:LeaderElection@187] - Server address: /127.0.0.1:11253 2016-07-21 06:02:36,053 [myid:] - INFO [Thread-15:LeaderElection@187] - Server address: /127.0.0.1:11229 2016-07-21 06:02:36,059 [myid:] - INFO [Thread-0:LeaderElection@187] - Server address: /127.0.0.1:11259 2016-07-21 06:02:36,059 [myid:] - INFO [Thread-15:LeaderElection@187] - Server address: /127.0.0.1:11231 2016-07-21 06:02:36,053 [myid:] - INFO [Thread-10:LeaderElection@187] - Server address: /127.0.0.1:11229 2016-07-21 06:02:36,053 [myid:] - INFO [Thread-9:LeaderElection@187] - Server address: /127.0.0.1:11229 2016-07-21 06:02:36,053 [myid:] - INFO [Thread-3:LeaderElection@187] - Server address: /127.0.0.1:11229 2016-07-21 06:02:36,053 [myid:] - INFO [Thread-13:LeaderElection@187] - Server address: /127.0.0.1:11241 2016-07-21 06:02:36,060 [myid:] - INFO [Thread-3:LeaderElection@187] - Server address: /127.0.0.1:11231 2016-07-21 06:02:36,052 [myid:] - INFO [Thread-14:LeaderElection@130] - 19 -> 1 2016-07-21 06:02:36,060 [myid:] - INFO [Thread-14:LeaderElection@130] - 14 -> 1 2016-07-21 06:02:36,052 [myid:] - INFO [Thread-22:LeaderElection@130] - 12 -> 1 2016-07-21 06:02:36,060 [myid:] - INFO [Thread-22:LeaderElection@130] - 
9 -> 1 2016-07-21 06:02:36,060 [myid:] - INFO [Thread-22:LeaderElection@130] - 10 -> 1 2016-07-21 06:02:36,060 [myid:] - INFO [Thread-22:LeaderElection@130] - 0 -> 1 2016-07-21 06:02:36,060 [myid:] - INFO [Thread-22:LeaderElection@130] - 4 -> 1 2016-07-21 06:02:36,060 [myid:] - INFO [Thread-22:LeaderElection@130] - 3 -> 1 2016-07-21 06:02:36,060 [myid:] - INFO [Thread-22:LeaderElection@130] - 2 -> 1 2016-07-21 06:02:36,060 [myid:] - INFO [Thread-22:LeaderElection@130] - 1 -> 1 2016-07-21 06:02:36,060 [myid:] - INFO [Thread-14:LeaderElection@130] - 22 -> 1 2016-07-21 06:02:36,061 [myid:] - INFO [Thread-14:LeaderElection@130] - 21 -> 1 2016-07-21 06:02:36,061 [myid:] - INFO [Thread-14:LeaderElection@130] - 13 -> 1 2016-07-21 06:02:36,061 [myid:] - INFO [Thread-14:LeaderElection@130] - 24 -> 1 2016-07-21 06:02:36,061 [myid:] - INFO [Thread-14:LeaderElection@130] - 16 -> 1 2016-07-21 06:02:36,060 [myid:] - INFO [Thread-3:LeaderElection@187] - Server address: /127.0.0.1:11233 2016-07-21 06:02:36,060 [myid:] - INFO [Thread-13:LeaderElection@187] - Server address: /127.0.0.1:11243 2016-07-21 06:02:36,059 [myid:] - INFO [Thread-9:LeaderElection@187] - Server address: /127.0.0.1:11231 2016-07-21 06:02:36,059 [myid:] - INFO [Thread-10:LeaderElection@187] - Server address: /127.0.0.1:11231 2016-07-21 06:02:36,061 [myid:] - INFO [Thread-9:LeaderElection@187] - Server address: /127.0.0.1:11233 2016-07-21 06:02:36,062 [myid:] - INFO [Thread-10:LeaderElection@187] - Server address: /127.0.0.1:11233 2016-07-21 06:02:36,059 [myid:] - INFO [Thread-15:LeaderElection@187] - Server address: /127.0.0.1:11233 2016-07-21 06:02:36,059 [myid:] - INFO [Thread-0:LeaderElection@187] - Server address: /127.0.0.1:11257 2016-07-21 06:02:36,059 [myid:] - INFO [Thread-7:LeaderElection@187] - Server address: /127.0.0.1:11251 2016-07-21 06:02:36,062 [myid:] - INFO [Thread-0:LeaderElection@187] - Server address: /127.0.0.1:11263 2016-07-21 06:02:36,059 [myid:] - INFO [Thread-1:LeaderElection@187] - 
Server address: /127.0.0.1:11247 2016-07-21 06:02:36,059 [myid:] - INFO [Thread-17:LeaderElection@187] - Server address: /127.0.0.1:11243 2016-07-21 06:02:36,062 [myid:] - INFO [Thread-1:LeaderElection@187] - Server address: /127.0.0.1:11249 2016-07-21 06:02:36,058 [myid:] - INFO [Thread-5:LeaderElection@187] - Server address: /127.0.0.1:11249 2016-07-21 06:02:36,058 [myid:] - INFO [Thread-4:LeaderElection@187] - Server address: /127.0.0.1:11267 2016-07-21 06:02:36,063 [myid:] - INFO [Thread-5:LeaderElection@187] - Server address: /127.0.0.1:11251 2016-07-21 06:02:36,058 [myid:] - INFO [Thread-11:LeaderElection@187] - Server address: /127.0.0.1:11257 2016-07-21 06:02:36,058 [myid:] - INFO [Thread-8:LeaderElection@187] - Server address: /127.0.0.1:11245 2016-07-21 06:02:36,058 [myid:] - INFO [Thread-6:LeaderElection@187] - Server address: /127.0.0.1:11249 2016-07-21 06:02:36,058 [myid:] - INFO [Thread-29:LeaderElection@187] - Server address: /127.0.0.1:11253 2016-07-21 06:02:36,063 [myid:] - INFO [Thread-6:LeaderElection@187] - Server address: /127.0.0.1:11251 2016-07-21 06:02:36,063 [myid:] - INFO [Thread-29:LeaderElection@187] - Server address: /127.0.0.1:11259 2016-07-21 06:02:36,063 [myid:] - INFO [Thread-6:LeaderElection@187] - Server address: /127.0.0.1:11255 2016-07-21 06:02:36,058 [myid:] - INFO [Thread-28:LeaderElection@187] - Server address: /127.0.0.1:11247 2016-07-21 06:02:36,057 [myid:] - INFO [Thread-30:LeaderElection@187] - Server address: /127.0.0.1:11247 2016-07-21 06:02:36,057 [myid:] - INFO [Thread-27:LeaderElection@187] - Server address: /127.0.0.1:11271 2016-07-21 06:02:36,057 [myid:] - INFO [Thread-26:LeaderElection@130] - 21 -> 1 2016-07-21 06:02:36,057 [myid:] - INFO [Thread-25:LeaderElection@130] - 22 -> 1 2016-07-21 06:02:36,057 [myid:] - INFO [Thread-21:LeaderElection@130] - 7 -> 1 2016-07-21 06:02:36,064 [myid:] - INFO [Thread-21:LeaderElection@130] - 18 -> 1 2016-07-21 06:02:36,064 [myid:] - INFO [Thread-21:LeaderElection@130] - 17 -> 1 
2016-07-21 06:02:36,065 [myid:] - INFO [Thread-21:LeaderElection@130] - 25 -> 1 2016-07-21 06:02:36,065 [myid:] - INFO [Thread-21:LeaderElection@130] - 20 -> 1 2016-07-21 06:02:36,065 [myid:] - INFO [Thread-21:LeaderElection@130] - 19 -> 1 2016-07-21 06:02:36,065 [myid:] - INFO [Thread-21:LeaderElection@130] - 14 -> 1 2016-07-21 06:02:36,065 [myid:] - INFO [Thread-21:LeaderElection@130] - 22 -> 1 2016-07-21 06:02:36,065 [myid:] - INFO [Thread-21:LeaderElection@130] - 21 -> 1 2016-07-21 06:02:36,065 [myid:] - INFO [Thread-21:LeaderElection@130] - 13 -> 1 2016-07-21 06:02:36,065 [myid:] - INFO [Thread-21:LeaderElection@130] - 24 -> 1 2016-07-21 06:02:36,066 [myid:] - INFO [Thread-21:LeaderElection@130] - 16 -> 1 2016-07-21 06:02:36,057 [myid:] - INFO [Thread-19:LeaderElection@130] - 18 -> 1 2016-07-21 06:02:36,056 [myid:] - INFO [Thread-12:LeaderElection@130] - 22 -> 1 2016-07-21 06:02:36,066 [myid:] - INFO [Thread-19:LeaderElection@130] - 17 -> 1 2016-07-21 06:02:36,066 [myid:] - INFO [Thread-19:LeaderElection@130] - 25 -> 1 2016-07-21 06:02:36,066 [myid:] - INFO [Thread-19:LeaderElection@130] - 20 -> 1 2016-07-21 06:02:36,066 [myid:] - INFO [Thread-19:LeaderElection@130] - 19 -> 1 2016-07-21 06:02:36,064 [myid:] - INFO [Thread-25:LeaderElection@130] - 13 -> 1 2016-07-21 06:02:36,066 [myid:] - INFO [Thread-25:LeaderElection@130] - 21 -> 1 2016-07-21 06:02:36,066 [myid:] - INFO [Thread-25:LeaderElection@130] - 16 -> 1 2016-07-21 06:02:36,067 [myid:] - INFO [Thread-25:LeaderElection@130] - 24 -> 1 2016-07-21 06:02:36,064 [myid:] - INFO [Thread-26:LeaderElection@130] - 16 -> 1 2016-07-21 06:02:36,064 [myid:] - INFO [Thread-27:LeaderElection@187] - Server address: /127.0.0.1:11269 2016-07-21 06:02:36,064 [myid:] - INFO [Thread-30:LeaderElection@187] - Server address: /127.0.0.1:11249 2016-07-21 06:02:36,064 [myid:] - INFO [Thread-28:LeaderElection@187] - Server address: /127.0.0.1:11249 2016-07-21 06:02:36,064 [myid:] - INFO [Thread-6:LeaderElection@187] - Server 
address: /127.0.0.1:11253 2016-07-21 06:02:36,068 [myid:] - INFO [Thread-28:LeaderElection@187] - Server address: /127.0.0.1:11251 2016-07-21 06:02:36,064 [myid:] - INFO [Thread-29:LeaderElection@187] - Server address: /127.0.0.1:11257 2016-07-21 06:02:36,068 [myid:] - INFO [Thread-28:LeaderElection@187] - Server address: /127.0.0.1:11255 2016-07-21 06:02:36,068 [myid:] - INFO [Thread-29:LeaderElection@187] - Server address: /127.0.0.1:11263 2016-07-21 06:02:36,063 [myid:] - INFO [Thread-8:LeaderElection@187] - Server address: /127.0.0.1:11247 2016-07-21 06:02:36,063 [myid:] - INFO [Thread-11:LeaderElection@187] - Server address: /127.0.0.1:11263 2016-07-21 06:02:36,063 [myid:] - INFO [Thread-5:LeaderElection@187] - Server address: /127.0.0.1:11255 2016-07-21 06:02:36,063 [myid:] - INFO [Thread-4:LeaderElection@187] - Server address: /127.0.0.1:11265 2016-07-21 06:02:36,063 [myid:] - INFO [Thread-1:LeaderElection@187] - Server address: /127.0.0.1:11251 2016-07-21 06:02:40,547 [myid:] - INFO [Thread-4:LeaderElection@187] - Server address: /127.0.0.1:11271 2016-07-21 06:02:36,063 [myid:] - INFO [Thread-17:LeaderElection@187] - Server address: /127.0.0.1:11245 2016-07-21 06:02:36,062 [myid:] - INFO [Thread-0:LeaderElection@187] - Server address: /127.0.0.1:11261 2016-07-21 06:02:40,547 [myid:] - INFO [Thread-17:LeaderElection@187] - Server address: /127.0.0.1:11247 2016-07-21 06:02:40,548 [myid:] - INFO [Thread-0:LeaderElection@187] - Server address: /127.0.0.1:11267 2016-07-21 06:02:36,062 [myid:] - INFO [Thread-7:LeaderElection@187] - Server address: /127.0.0.1:11255 2016-07-21 06:02:36,062 [myid:] - INFO [Thread-15:LeaderElection@187] - Server address: /127.0.0.1:11235 2016-07-21 06:02:36,062 [myid:] - INFO [Thread-10:LeaderElection@187] - Server address: /127.0.0.1:11235 2016-07-21 06:02:40,548 [myid:] - INFO [Thread-15:LeaderElection@187] - Server address: /127.0.0.1:11237 2016-07-21 06:02:36,062 [myid:] - INFO [Thread-9:LeaderElection@187] - Server address: 
/127.0.0.1:11235 2016-07-21 06:02:36,061 [myid:] - INFO [Thread-13:LeaderElection@187] - Server address: /127.0.0.1:11245 2016-07-21 06:02:36,061 [myid:] - INFO [Thread-3:LeaderElection@187] - Server address: /127.0.0.1:11235 2016-07-21 06:02:40,549 [myid:] - INFO [Thread-13:LeaderElection@187] - Server address: /127.0.0.1:11247 2016-07-21 06:02:40,549 [myid:] - INFO [Thread-3:LeaderElection@187] - Server address: /127.0.0.1:11237 2016-07-21 06:02:36,061 [myid:] - INFO [Thread-22:LeaderElection@130] - 7 -> 1 2016-07-21 06:02:40,549 [myid:] - INFO [Thread-3:LeaderElection@187] - Server address: /127.0.0.1:11239 2016-07-21 06:02:40,550 [myid:] - INFO [Thread-3:LeaderElection@187] - Server address: /127.0.0.1:11241 2016-07-21 06:02:40,549 [myid:] - INFO [Thread-13:LeaderElection@187] - Server address: /127.0.0.1:11249 2016-07-21 06:02:40,550 [myid:] - INFO [Thread-3:LeaderElection@187] - Server address: /127.0.0.1:11243 2016-07-21 06:02:40,550 [myid:] - INFO [Thread-13:LeaderElection@187] - Server address: /127.0.0.1:11251 2016-07-21 06:02:40,550 [myid:] - INFO [Thread-3:LeaderElection@187] - Server address: /127.0.0.1:11245 2016-07-21 06:02:40,550 [myid:] - INFO [Thread-13:LeaderElection@187] - Server address: /127.0.0.1:11255 2016-07-21 06:02:40,550 [myid:] - INFO [Thread-3:LeaderElection@187] - Server address: /127.0.0.1:11247 2016-07-21 06:02:40,550 [myid:] - INFO [Thread-13:LeaderElection@187] - Server address: /127.0.0.1:11253 2016-07-21 06:02:40,550 [myid:] - INFO [Thread-3:LeaderElection@187] - Server address: /127.0.0.1:11249 2016-07-21 06:02:40,551 [myid:] - INFO [Thread-13:LeaderElection@187] - Server address: /127.0.0.1:11259 2016-07-21 06:02:40,551 [myid:] - INFO [Thread-3:LeaderElection@187] - Server address: /127.0.0.1:11251 2016-07-21 06:02:40,549 [myid:] - INFO [Thread-9:LeaderElection@187] - Server address: /127.0.0.1:11237 2016-07-21 06:02:40,551 [myid:] - INFO [Thread-3:LeaderElection@187] - Server address: /127.0.0.1:11255 2016-07-21 06:02:40,551 
[myid:] - INFO [Thread-9:LeaderElection@187] - Server address: /127.0.0.1:11239 2016-07-21 06:02:40,549 [myid:] - INFO [Thread-15:LeaderElection@187] - Server address: /127.0.0.1:11239 2016-07-21 06:02:40,548 [myid:] - INFO [Thread-10:LeaderElection@187] - Server address: /127.0.0.1:11237 2016-07-21 06:02:40,551 [myid:] - INFO [Thread-15:LeaderElection@187] - Server address: /127.0.0.1:11241 2016-07-21 06:02:40,548 [myid:] - INFO [Thread-7:LeaderElection@187] - Server address: /127.0.0.1:11253 2016-07-21 06:02:40,552 [myid:] - INFO [Thread-15:LeaderElection@187] - Server address: /127.0.0.1:11243 2016-07-21 06:02:40,548 [myid:] - INFO [Thread-0:LeaderElection@187] - Server address: /127.0.0.1:11265 2016-07-21 06:02:40,548 [myid:] - INFO [Thread-17:LeaderElection@187] - Server address: /127.0.0.1:11249 2016-07-21 06:02:40,552 [myid:] - INFO [Thread-0:LeaderElection@187] - Server address: /127.0.0.1:11271 2016-07-21 06:02:40,552 [myid:] - INFO [Thread-17:LeaderElection@187] - Server address: /127.0.0.1:11251 2016-07-21 06:02:40,552 [myid:] - INFO [Thread-17:LeaderElection@187] - Server address: /127.0.0.1:11255 2016-07-21 06:02:40,547 [myid:] - INFO [Thread-4:LeaderElection@187] - Server address: /127.0.0.1:11269 2016-07-21 06:02:40,547 [myid:] - INFO [Thread-1:LeaderElection@187] - Server address: /127.0.0.1:11255 2016-07-21 06:02:40,547 [myid:] - INFO [Thread-5:LeaderElection@187] - Server address: /127.0.0.1:11253 2016-07-21 06:02:37,067 [myid:] - INFO [Thread-26:LeaderElection@187] - Server address: /127.0.0.1:11221 2016-07-21 06:02:37,067 [myid:] - INFO [Thread-25:LeaderElection@187] - Server address: /127.0.0.1:11221 2016-07-21 06:02:37,066 [myid:] - INFO [Thread-21:LeaderElection@187] - Server address: /127.0.0.1:11221 2016-07-21 06:02:37,061 [myid:] - INFO [Thread-14:LeaderElection@187] - Server address: /127.0.0.1:11221 2016-07-21 06:02:37,053 [myid:] - INFO [Thread-18:LeaderElection@187] - Server address: /127.0.0.1:11221 2016-07-21 06:02:40,554 [myid:] - 
INFO [Thread-14:LeaderElection@187] - Server address: /127.0.0.1:11223 2016-07-21 06:02:40,554 [myid:] - INFO [Thread-18:LeaderElection@187] - Server address: /127.0.0.1:11223 2016-07-21 06:02:40,554 [myid:] - INFO [Thread-14:LeaderElection@187] - Server address: /127.0.0.1:11225 2016-07-21 06:02:40,554 [myid:] - INFO [Thread-18:LeaderElection@187] - Server address: /127.0.0.1:11225 2016-07-21 06:02:40,554 [myid:] - INFO [Thread-18:LeaderElection@187] - Server address: /127.0.0.1:11227 2016-07-21 06:02:40,555 [myid:] - INFO [Thread-18:LeaderElection@187] - Server address: /127.0.0.1:11229 2016-07-21 06:02:37,051 [myid:] - INFO [Thread-24:LeaderElection@187] - Server address: /127.0.0.1:11221 2016-07-21 06:02:37,013 [myid:] - INFO [Thread-23:LeaderElection@187] - Server address: /127.0.0.1:11221 2016-07-21 06:02:37,012 [myid:] - INFO [Thread-20:LeaderElection@187] - Server address: /127.0.0.1:11221 2016-07-21 06:02:40,555 [myid:] - INFO [Thread-23:LeaderElection@187] - Server address: /127.0.0.1:11223 2016-07-21 06:02:37,012 [myid:] - INFO [Thread-16:LeaderElection@187] - Server address: /127.0.0.1:11221 2016-07-21 06:02:36,069 [myid:] - INFO [Thread-11:LeaderElection@187] - Server address: /127.0.0.1:11261 2016-07-21 06:02:40,556 [myid:] - INFO [Thread-16:LeaderElection@187] - Server address: /127.0.0.1:11223 2016-07-21 06:02:40,556 [myid:] - INFO [Thread-11:LeaderElection@187] - Server address: /127.0.0.1:11267 2016-07-21 06:02:40,556 [myid:] - INFO [Thread-16:LeaderElection@187] - Server address: /127.0.0.1:11225 2016-07-21 06:02:40,556 [myid:] - INFO [Thread-11:LeaderElection@187] - Server address: /127.0.0.1:11265 2016-07-21 06:02:40,556 [myid:] - INFO [Thread-16:LeaderElection@187] - Server address: /127.0.0.1:11227 2016-07-21 06:02:40,556 [myid:] - INFO [Thread-11:LeaderElection@187] - Server address: /127.0.0.1:11271 2016-07-21 06:02:40,556 [myid:] - INFO [Thread-16:LeaderElection@187] - Server address: /127.0.0.1:11229 2016-07-21 06:02:40,557 [myid:] - INFO 
[Thread-11:LeaderElection@187] - Server address: /127.0.0.1:11269 2016-07-21 06:02:40,557 [myid:] - INFO [Thread-16:LeaderElection@187] - Server address: /127.0.0.1:11231 2016-07-21 06:02:40,557 [myid:] - INFO [Thread-11:LeaderElection@187] - Server address: /127.0.0.1:11275 2016-07-21 06:02:40,557 [myid:] - INFO [Thread-16:LeaderElection@187] - Server address: /127.0.0.1:11233 2016-07-21 06:02:40,557 [myid:] - INFO [Thread-11:LeaderElection@187] - Server address: /127.0.0.1:11273 2016-07-21 06:02:36,069 [myid:] - INFO [Thread-8:LeaderElection@187] - Server address: /127.0.0.1:11249 2016-07-21 06:02:36,069 [myid:] - INFO [Thread-29:LeaderElection@187] - Server address: /127.0.0.1:11261 2016-07-21 06:02:36,068 [myid:] - INFO [Thread-28:LeaderElection@187] - Server address: /127.0.0.1:11253 2016-07-21 06:02:36,068 [myid:] - INFO [Thread-6:LeaderElection@187] - Server address: /127.0.0.1:11259 2016-07-21 06:02:36,068 [myid:] - INFO [Thread-30:LeaderElection@187] - Server address: /127.0.0.1:11251 2016-07-21 06:02:45,396 [myid:] - INFO [Thread-6:LeaderElection@187] - Server address: /127.0.0.1:11257 2016-07-21 06:02:36,068 [myid:] - INFO [Thread-27:LeaderElection@187] - Server address: /127.0.0.1:11275 2016-07-21 06:02:36,066 [myid:] - INFO [Thread-19:LeaderElection@130] - 14 -> 1 2016-07-21 06:02:45,397 [myid:] - INFO [Thread-19:LeaderElection@130] - 22 -> 1 2016-07-21 06:02:45,397 [myid:] - INFO [Thread-19:LeaderElection@130] - 13 -> 1 2016-07-21 06:02:45,397 [myid:] - INFO [Thread-19:LeaderElection@130] - 24 -> 1 2016-07-21 06:02:36,066 [myid:] - INFO [Thread-12:LeaderElection@130] - 13 -> 1 2016-07-21 06:02:45,397 [myid:] - INFO [Thread-12:LeaderElection@130] - 21 -> 1 2016-07-21 06:02:45,397 [myid:] - INFO [Thread-12:LeaderElection@130] - 24 -> 1 2016-07-21 06:02:45,398 [myid:] - INFO [Thread-12:LeaderElection@130] - 16 -> 1 2016-07-21 06:02:45,398 [myid:] - INFO [Thread-12:LeaderElection@130] - 23 -> 1 2016-07-21 06:02:45,397 [myid:] - INFO 
[Thread-19:LeaderElection@130] - 16 -> 1 2016-07-21 06:02:45,398 [myid:] - INFO [Thread-19:LeaderElection@130] - 23 -> 1 2016-07-21 06:02:45,397 [myid:] - INFO [Thread-27:LeaderElection@187] - Server address: /127.0.0.1:11273 2016-07-21 06:02:45,396 [myid:] - INFO [Thread-6:LeaderElection@187] - Server address: /127.0.0.1:11263 2016-07-21 06:02:45,396 [myid:] - INFO [Thread-30:LeaderElection@187] - Server address: /127.0.0.1:11255 2016-07-21 06:02:45,399 [myid:] - INFO [Thread-6:LeaderElection@187] - Server address: /127.0.0.1:11261 2016-07-21 06:02:45,396 [myid:] - INFO [Thread-28:LeaderElection@187] - Server address: /127.0.0.1:11259 2016-07-21 06:02:45,399 [myid:] - INFO [Thread-6:LeaderElection@187] - Server address: /127.0.0.1:11267 2016-07-21 06:02:45,399 [myid:] - INFO [Thread-28:LeaderElection@187] - Server address: /127.0.0.1:11257
2016-07-21 06:02:44,735 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@74] - TEST METHOD FAILED testLE
java.lang.AssertionError: Threads didn't join
    at org.junit.Assert.fail(Assert.java:91)
    at org.apache.zookeeper.test.LETest.testLE(LETest.java:120)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
    at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:55)
    at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:48)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
    at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:532)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1179)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:1030)
2016-07-21 06:02:40,558 [myid:] - INFO [Thread-29:LeaderElection@187] - Server address: /127.0.0.1:11267 2016-07-21 06:02:40,558 [myid:] - INFO [Thread-8:LeaderElection@187] - Server address: /127.0.0.1:11251 2016-07-21 06:02:40,557 [myid:] - INFO [Thread-11:LeaderElection@187] - Server address: /127.0.0.1:11279 2016-07-21 06:02:40,557 [myid:] - INFO [Thread-16:LeaderElection@187] - Server address: /127.0.0.1:11235 2016-07-21 06:02:45,401 [myid:] - INFO [Thread-11:LeaderElection@187] - Server address: /127.0.0.1:11277 2016-07-21 06:02:40,555 [myid:] - INFO [Thread-23:LeaderElection@187] - Server address: /127.0.0.1:11225 2016-07-21 06:02:40,555 [myid:] - INFO [Thread-20:LeaderElection@187] - Server address: /127.0.0.1:11223 2016-07-21 06:02:40,555 [myid:] - INFO [Thread-24:LeaderElection@187] - Server address: /127.0.0.1:11223 2016-07-21 06:02:40,555 [myid:] - INFO [Thread-14:LeaderElection@187] - Server address: /127.0.0.1:11227 2016-07-21 06:02:45,402 [myid:] - INFO [Thread-24:LeaderElection@187] - Server address: /127.0.0.1:11225 2016-07-21 06:02:40,555 [myid:] - INFO [Thread-18:LeaderElection@187] - Server address: /127.0.0.1:11231 2016-07-21 
06:02:40,554 [myid:] - INFO [Thread-21:LeaderElection@187] - Server address: /127.0.0.1:11223 2016-07-21 06:02:40,554 [myid:] - INFO [Thread-25:LeaderElection@187] - Server address: /127.0.0.1:11223 2016-07-21 06:02:45,403 [myid:] - INFO [Thread-21:LeaderElection@187] - Server address: /127.0.0.1:11225 2016-07-21 06:02:40,553 [myid:] - INFO [Thread-26:LeaderElection@187] - Server address: /127.0.0.1:11223 2016-07-21 06:02:40,553 [myid:] - INFO [Thread-5:LeaderElection@187] - Server address: /127.0.0.1:11259 2016-07-21 06:02:45,403 [myid:] - INFO [Thread-26:LeaderElection@187] - Server address: /127.0.0.1:11225 2016-07-21 06:02:45,403 [myid:] - INFO [Thread-5:LeaderElection@187] - Server address: /127.0.0.1:11257 2016-07-21 06:02:40,553 [myid:] - INFO [Thread-1:LeaderElection@187] - Server address: /127.0.0.1:11253 2016-07-21 06:02:40,553 [myid:] - INFO [Thread-4:LeaderElection@187] - Server address: /127.0.0.1:11275 2016-07-21 06:02:45,404 [myid:] - INFO [Thread-1:LeaderElection@187] - Server address: /127.0.0.1:11259 2016-07-21 06:02:40,553 [myid:] - INFO [Thread-17:LeaderElection@187] - Server address: /127.0.0.1:11253 2016-07-21 06:02:45,404 [myid:] - INFO [Thread-1:LeaderElection@187] - Server address: /127.0.0.1:11257 2016-07-21 06:02:40,552 [myid:] - INFO [Thread-0:LeaderElection@187] - Server address: /127.0.0.1:11269 2016-07-21 06:02:40,552 [myid:] - INFO [Thread-15:LeaderElection@187] - Server address: /127.0.0.1:11245 2016-07-21 06:02:45,405 [myid:] - INFO [Thread-0:LeaderElection@187] - Server address: /127.0.0.1:11275 2016-07-21 06:02:40,552 [myid:] - INFO [Thread-3:LeaderElection@187] - Server address: /127.0.0.1:11253 2016-07-21 06:02:40,552 [myid:] - INFO [Thread-7:LeaderElection@187] - Server address: /127.0.0.1:11259 2016-07-21 06:02:40,551 [myid:] - INFO [Thread-10:LeaderElection@187] - Server address: /127.0.0.1:11239 2016-07-21 06:02:40,551 [myid:] - INFO [Thread-9:LeaderElection@187] - Server address: /127.0.0.1:11241 2016-07-21 06:02:40,551 
[myid:] - INFO [Thread-13:LeaderElection@187] - Server address: /127.0.0.1:11257 2016-07-21 06:02:45,406 [myid:] - INFO [Thread-9:LeaderElection@187] - Server address: /127.0.0.1:11243 2016-07-21 06:02:45,406 [myid:] - INFO [Thread-13:LeaderElection@187] - Server address: /127.0.0.1:11263 2016-07-21 06:02:45,406 [myid:] - INFO [Thread-9:LeaderElection@187] - Server address: /127.0.0.1:11245 2016-07-21 06:02:45,406 [myid:] - INFO [Thread-13:LeaderElection@187] - Server address: /127.0.0.1:11261 2016-07-21 06:02:40,549 [myid:] - INFO [Thread-22:LeaderElection@130] - 18 -> 1 2016-07-21 06:02:45,407 [myid:] - INFO [Thread-22:LeaderElection@130] - 17 -> 1 2016-07-21 06:02:45,407 [myid:] - INFO [Thread-22:LeaderElection@130] - 20 -> 1 2016-07-21 06:02:45,407 [myid:] - INFO [Thread-22:LeaderElection@130] - 19 -> 1 2016-07-21 06:02:45,407 [myid:] - INFO [Thread-22:LeaderElection@130] - 14 -> 1 2016-07-21 06:02:45,407 [myid:] - INFO [Thread-22:LeaderElection@130] - 22 -> 1 2016-07-21 06:02:45,407 [myid:] - INFO [Thread-22:LeaderElection@130] - 13 -> 1 2016-07-21 06:02:45,407 [myid:] - INFO [Thread-22:LeaderElection@130] - 21 -> 1 2016-07-21 06:02:45,407 [myid:] - INFO [Thread-22:LeaderElection@130] - 24 -> 1 2016-07-21 06:02:45,407 [myid:] - INFO [Thread-22:LeaderElection@130] - 16 -> 1 2016-07-21 06:02:45,408 [myid:] - INFO [Thread-22:LeaderElection@130] - 23 -> 1 2016-07-21 06:02:45,406 [myid:] - INFO [Thread-13:LeaderElection@187] - Server address: /127.0.0.1:11267 2016-07-21 06:02:45,406 [myid:] - INFO [Thread-9:LeaderElection@187] - Server address: /127.0.0.1:11247 2016-07-21 06:02:45,406 [myid:] - INFO [Thread-10:LeaderElection@187] - Server address: /127.0.0.1:11241 2016-07-21 06:02:45,405 [myid:] - INFO [Thread-7:LeaderElection@187] - Server address: /127.0.0.1:11257 2016-07-21 06:02:45,405 [myid:] - INFO [Thread-3:LeaderElection@187] - Server address: /127.0.0.1:11259 2016-07-21 06:02:45,408 [myid:] - INFO [Thread-10:LeaderElection@187] - Server address: 
/127.0.0.1:11243 2016-07-21 06:02:45,409 [myid:] - INFO [Thread-3:LeaderElection@187] - Server address: /127.0.0.1:11257 2016-07-21 06:02:45,409 [myid:] - INFO [Thread-10:LeaderElection@187] - Server address: /127.0.0.1:11245 2016-07-21 06:02:45,409 [myid:] - INFO [Thread-3:LeaderElection@187] - Server address: /127.0.0.1:11263 2016-07-21 06:02:45,405 [myid:] - INFO [Thread-0:LeaderElection@187] - Server address: /127.0.0.1:11273 2016-07-21 06:02:45,409 [myid:] - INFO [Thread-3:LeaderElection@187] - Server address: /127.0.0.1:11261 2016-07-21 06:02:45,405 [myid:] - INFO [Thread-15:LeaderElection@187] - Server address: /127.0.0.1:11247 2016-07-21 06:02:45,404 [myid:] - INFO [Thread-1:LeaderElection@187] - Server address: /127.0.0.1:11263 2016-07-21 06:02:45,409 [myid:] - INFO [Thread-15:LeaderElection@187] - Server address: /127.0.0.1:11249 2016-07-21 06:02:45,410 [myid:] - INFO [Thread-1:LeaderElection@187] - Server address: /127.0.0.1:11261 2016-07-21 06:02:45,404 [myid:] - INFO [Thread-17:LeaderElection@187] - Server address: /127.0.0.1:11259 2016-07-21 06:02:45,410 [myid:] - INFO [Thread-1:LeaderElection@187] - Server address: /127.0.0.1:11267 2016-07-21 06:02:45,410 [myid:] - INFO [Thread-17:LeaderElection@187] - Server address: /127.0.0.1:11257 2016-07-21 06:02:45,404 [myid:] - INFO [Thread-4:LeaderElection@187] - Server address: /127.0.0.1:11273 2016-07-21 06:02:45,404 [myid:] - INFO [Thread-5:LeaderElection@187] - Server address: /127.0.0.1:11263 2016-07-21 06:02:45,403 [myid:] - INFO [Thread-26:LeaderElection@187] - Server address: /127.0.0.1:11227 2016-07-21 06:02:45,403 [myid:] - INFO [Thread-21:LeaderElection@187] - Server address: /127.0.0.1:11227 2016-07-21 06:02:45,411 [myid:] - INFO [Thread-26:LeaderElection@187] - Server address: /127.0.0.1:11229 2016-07-21 06:02:45,403 [myid:] - INFO [Thread-25:LeaderElection@187] - Server address: /127.0.0.1:11225 2016-07-21 06:02:45,402 [myid:] - INFO [Thread-18:LeaderElection@187] - Server address: 
/127.0.0.1:11233 2016-07-21 06:02:45,402 [myid:] - INFO [Thread-24:LeaderElection@187] - Server address: /127.0.0.1:11227 2016-07-21 06:02:45,402 [myid:] - INFO [Thread-14:LeaderElection@187] - Server address: /127.0.0.1:11229 2016-07-21 06:02:45,401 [myid:] - INFO [Thread-20:LeaderElection@187] - Server address: /127.0.0.1:11225 2016-07-21 06:02:45,412 [myid:] - INFO [Thread-14:LeaderElection@187] - Server address: /127.0.0.1:11231 2016-07-21 06:02:45,401 [myid:] - INFO [Thread-11:LeaderElection@124] - Election tally: 2016-07-21 06:02:45,412 [myid:] - INFO [Thread-14:LeaderElection@187] - Server address: /127.0.0.1:11233 2016-07-21 06:02:45,401 [myid:] - INFO [Thread-23:LeaderElection@187] - Server address: /127.0.0.1:11227 2016-07-21 06:02:45,401 [myid:] - INFO [Thread-16:LeaderElection@187] - Server address: /127.0.0.1:11237 2016-07-21 06:02:45,401 [myid:] - INFO [Thread-8:LeaderElection@187] - Server address: /127.0.0.1:11255 2016-07-21 06:02:45,400 [myid:] - INFO [Thread-29:LeaderElection@187] - Server address: /127.0.0.1:11265 2016-07-21 06:02:45,400 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testLE java.lang.AssertionError: Threads didn't join at org.junit.Assert.fail(Assert.java:91) at org.apache.zookeeper.test.LETest.testLE(LETest.java:120) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) at 
org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:55) at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:48) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184) at org.junit.runners.ParentRunner.run(ParentRunner.java:236) at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:532) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1179) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:1030) 2016-07-21 06:02:45,399 [myid:] - INFO [Thread-28:LeaderElection@187] - Server address: /127.0.0.1:11263 2016-07-21 06:02:45,399 [myid:] - INFO [Thread-6:LeaderElection@187] - Server address: /127.0.0.1:11265 2016-07-21 06:02:45,399 [myid:] - INFO [Thread-30:LeaderElection@187] - Server address: /127.0.0.1:11253 2016-07-21 06:02:45,399 [myid:] - INFO [Thread-27:LeaderElection@187] - Server address: /127.0.0.1:11279 2016-07-21 06:02:45,418 [myid:] - INFO [Thread-30:LeaderElection@187] - Server address: /127.0.0.1:11259 2016-07-21 06:02:45,418 [myid:] - INFO [Thread-6:LeaderElection@187] - Server address: /127.0.0.1:11271 2016-07-21 06:02:45,417 [myid:] - INFO [Thread-28:LeaderElection@187] - Server address: /127.0.0.1:11261 2016-07-21 06:02:45,417 [myid:] - INFO [main:ZKTestCase$1@55] - FINISHED testLE 2016-07-21 06:02:45,413 [myid:] - INFO [Thread-29:LeaderElection@187] - Server address: /127.0.0.1:11271 
2016-07-21 06:02:45,413 [myid:] - INFO [Thread-8:LeaderElection@187] - Server address: /127.0.0.1:11253 2016-07-21 06:02:45,413 [myid:] - INFO [Thread-16:LeaderElection@187] - Server address: /127.0.0.1:11239 2016-07-21 06:02:45,413 [myid:] - INFO [Thread-23:LeaderElection@187] - Server address: /127.0.0.1:11229 2016-07-21 06:02:45,413 [myid:] - INFO [Thread-14:LeaderElection@187] - Server address: /127.0.0.1:11235 2016-07-21 06:02:45,412 [myid:] - INFO [Thread-11:LeaderElection@130] - 15 -> 1 2016-07-21 06:02:45,420 [myid:] - INFO [Thread-11:LeaderElection@130] - 13 -> 1 2016-07-21 06:02:45,420 [myid:] - INFO [Thread-11:LeaderElection@130] - 21 -> 1 2016-07-21 06:02:45,420 [myid:] - INFO [Thread-11:LeaderElection@130] - 27 -> 1 2016-07-21 06:02:45,420 [myid:] - INFO [Thread-11:LeaderElection@130] - 17 -> 1 2016-07-21 06:02:45,420 [myid:] - INFO [Thread-11:LeaderElection@130] - 18 -> 1 2016-07-21 06:02:45,420 [myid:] - INFO [Thread-11:LeaderElection@130] - 11 -> 1 2016-07-21 06:02:45,420 [myid:] - INFO [Thread-11:LeaderElection@130] - 29 -> 23 2016-07-21 06:02:45,412 [myid:] - INFO [Thread-20:LeaderElection@187] - Server address: /127.0.0.1:11227 2016-07-21 06:02:45,412 [myid:] - INFO [Thread-24:LeaderElection@187] - Server address: /127.0.0.1:11229 2016-07-21 06:02:45,412 [myid:] - INFO [Thread-18:LeaderElection@187] - Server address: /127.0.0.1:11235 2016-07-21 06:02:45,411 [myid:] - INFO [Thread-25:LeaderElection@187] - Server address: /127.0.0.1:11227 2016-07-21 06:02:45,411 [myid:] - INFO [Thread-26:LeaderElection@187] - Server address: /127.0.0.1:11231 2016-07-21 06:02:45,421 [myid:] - INFO [Thread-25:LeaderElection@187] - Server address: /127.0.0.1:11229 2016-07-21 06:02:45,411 [myid:] - INFO [Thread-21:LeaderElection@187] - Server address: /127.0.0.1:11229 2016-07-21 06:02:45,411 [myid:] - INFO [Thread-5:LeaderElection@187] - Server address: /127.0.0.1:11261 2016-07-21 06:02:45,422 [myid:] - INFO [Thread-21:LeaderElection@187] - Server address: 
/127.0.0.1:11231 2016-07-21 06:02:45,422 [myid:] - INFO [Thread-5:LeaderElection@187] - Server address: /127.0.0.1:11267 2016-07-21 06:02:45,411 [myid:] - INFO [Thread-4:LeaderElection@187] - Server address: /127.0.0.1:11279 2016-07-21 06:02:45,410 [myid:] - INFO [Thread-17:LeaderElection@187] - Server address: /127.0.0.1:11263 2016-07-21 06:02:45,422 [myid:] - INFO [Thread-4:LeaderElection@187] - Server address: /127.0.0.1:11277 2016-07-21 06:02:45,410 [myid:] - INFO [Thread-1:LeaderElection@187] - Server address: /127.0.0.1:11265 2016-07-21 06:02:45,410 [myid:] - INFO [Thread-15:LeaderElection@187] - Server address: /127.0.0.1:11251 2016-07-21 06:02:45,409 [myid:] - INFO [Thread-3:LeaderElection@187] - Server address: /127.0.0.1:11267 2016-07-21 06:02:45,409 [myid:] - INFO [Thread-0:LeaderElection@187] - Server address: /127.0.0.1:11279 2016-07-21 06:02:45,409 [myid:] - INFO [Thread-10:LeaderElection@187] - Server address: /127.0.0.1:11247 2016-07-21 06:02:45,408 [myid:] - INFO [Thread-7:LeaderElection@187] - Server address: /127.0.0.1:11263 2016-07-21 06:02:45,408 [myid:] - INFO [Thread-9:LeaderElection@187] - Server address: /127.0.0.1:11249 2016-07-21 06:02:45,408 [myid:] - INFO [Thread-13:LeaderElection@187] - Server address: /127.0.0.1:11265 2016-07-21 06:02:45,424 [myid:] - INFO [Thread-7:LeaderElection@187] - Server address: /127.0.0.1:11261 2016-07-21 06:02:45,424 [myid:] - INFO [Thread-13:LeaderElection@187] - Server address: /127.0.0.1:11271 2016-07-21 06:02:45,424 [myid:] - INFO [Thread-10:LeaderElection@187] - Server address: /127.0.0.1:11249 2016-07-21 06:02:45,423 [myid:] - INFO [Thread-0:LeaderElection@187] - Server address: /127.0.0.1:11277 2016-07-21 06:02:45,425 [myid:] - INFO [Thread-10:LeaderElection@187] - Server address: /127.0.0.1:11251 {noformat} |
flaky, flaky-test | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 34 weeks, 1 day ago | 0|i31cjr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2482 | Flaky Test: org.apache.zookeeper.test.ClientPortBindTest.testBindByAddress |
Test | Closed | Major | Fixed | Michael Han | Michael Han | Michael Han | 21/Jul/16 17:40 | 17/May/17 23:43 | 15/Aug/16 18:46 | 3.5.2 | 3.5.3 | server, tests | 0 | 1 | ZOOKEEPER-2135, ZOOKEEPER-1256 | From https://builds.apache.org/job/ZooKeeper_branch34/1587/ {noformat} Error Message No such device Stacktrace java.net.SocketException: No such device at java.net.NetworkInterface.isLoopback0(Native Method) at java.net.NetworkInterface.isLoopback(NetworkInterface.java:339) at org.apache.zookeeper.test.ClientPortBindTest.testBindByAddress(ClientPortBindTest.java:61) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:55) Standard Output 2016-07-21 05:58:50,388 [myid:] - INFO [main:ZKTestCase$1@50] - STARTING testBindByAddress 2016-07-21 05:58:50,393 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@53] - RUNNING TEST METHOD testBindByAddress 2016-07-21 05:58:50,405 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@74] - TEST METHOD FAILED testBindByAddress java.net.SocketException: No such device at java.net.NetworkInterface.isLoopback0(Native Method) at java.net.NetworkInterface.isLoopback(NetworkInterface.java:339) at org.apache.zookeeper.test.ClientPortBindTest.testBindByAddress(ClientPortBindTest.java:61) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) at 
org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:55) at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:48) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184) at org.junit.runners.ParentRunner.run(ParentRunner.java:236) at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:532) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1179) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:1030) 2016-07-21 05:58:50,408 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testBindByAddress java.net.SocketException: No such device at java.net.NetworkInterface.isLoopback0(Native Method) at java.net.NetworkInterface.isLoopback(NetworkInterface.java:339) at org.apache.zookeeper.test.ClientPortBindTest.testBindByAddress(ClientPortBindTest.java:61) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:55) at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:48) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184) at org.junit.runners.ParentRunner.run(ParentRunner.java:236) at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:532) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1179) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:1030) 2016-07-21 05:58:50,409 [myid:] - INFO [main:ZKTestCase$1@55] - FINISHED testBindByAddress {noformat} |
flaky, flaky-test | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 31 weeks, 3 days ago | 0|i31cjb: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2481 | Flaky Test: testZeroWeightQuorum (sub-task of ZOOKEEPER-3170) |
Sub-task | Closed | Major | Cannot Reproduce | Andor Molnar | Michael Han | Michael Han | 21/Jul/16 17:33 | 19/Dec/19 18:02 | 25/Oct/18 11:11 | 3.4.8, 3.5.2 | 3.5.5 | server, tests | 0 | 4 | ZOOKEEPER-2135 | See https://builds.apache.org/job/ZooKeeper-trunk-openjdk7/1098/ {noformat} Error Message Threads didn't join Stacktrace junit.framework.AssertionFailedError: Threads didn't join at org.apache.zookeeper.test.FLEZeroWeightTest.testZeroWeightQuorum(FLEZeroWeightTest.java:167) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:79) Standard Output 2016-07-21 04:24:14,065 [myid:] - INFO [main:JUnit4ZKTestRunner@47] - No test.method specified. using default methods. 2016-07-21 04:24:14,158 [myid:] - INFO [main:JUnit4ZKTestRunner@47] - No test.method specified. using default methods. 2016-07-21 04:24:14,176 [myid:] - INFO [main:ZKTestCase$1@55] - STARTING testZeroWeightQuorum 2016-07-21 04:24:14,180 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@77] - RUNNING TEST METHOD testZeroWeightQuorum 2016-07-21 04:24:14,180 [myid:] - INFO [main:FLEZeroWeightTest@143] - TestZeroWeightQuorum: testZeroWeightQuorum, 9 2016-07-21 04:24:14,183 [myid:] - INFO [main:PortAssignment@157] - Single test process using ports from 11221 - 32767. 2016-07-21 04:24:14,187 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11222 from range 11221 - 32767. 2016-07-21 04:24:14,189 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11223 from range 11221 - 32767. 2016-07-21 04:24:14,189 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11224 from range 11221 - 32767. 2016-07-21 04:24:14,218 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11225 from range 11221 - 32767. 2016-07-21 04:24:14,218 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11226 from range 11221 - 32767. 2016-07-21 04:24:14,219 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11227 from range 11221 - 32767. 
2016-07-21 04:24:14,219 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11228 from range 11221 - 32767. 2016-07-21 04:24:14,220 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11229 from range 11221 - 32767. 2016-07-21 04:24:14,220 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11230 from range 11221 - 32767. 2016-07-21 04:24:14,221 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11231 from range 11221 - 32767. 2016-07-21 04:24:14,224 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11232 from range 11221 - 32767. 2016-07-21 04:24:14,224 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11233 from range 11221 - 32767. 2016-07-21 04:24:14,225 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11234 from range 11221 - 32767. 2016-07-21 04:24:14,225 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11235 from range 11221 - 32767. 2016-07-21 04:24:14,225 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11236 from range 11221 - 32767. 2016-07-21 04:24:14,226 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11237 from range 11221 - 32767. 2016-07-21 04:24:14,226 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11238 from range 11221 - 32767. 2016-07-21 04:24:14,227 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11239 from range 11221 - 32767. 2016-07-21 04:24:14,227 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11240 from range 11221 - 32767. 2016-07-21 04:24:14,228 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11241 from range 11221 - 32767. 2016-07-21 04:24:14,228 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11242 from range 11221 - 32767. 2016-07-21 04:24:14,229 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11243 from range 11221 - 32767. 2016-07-21 04:24:14,229 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11244 from range 11221 - 32767. 
2016-07-21 04:24:14,229 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11245 from range 11221 - 32767. 2016-07-21 04:24:14,230 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11246 from range 11221 - 32767. 2016-07-21 04:24:14,230 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11247 from range 11221 - 32767. 2016-07-21 04:24:14,231 [myid:] - INFO [main:PortAssignment@85] - Assigned port 11248 from range 11221 - 32767. 2016-07-21 04:24:14,235 [myid:] - INFO [main:QuorumHierarchical@136] - 9, 9, 3 2016-07-21 04:24:14,262 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 1 selector thread(s), 8 worker threads, and 64 kB direct buffers. 2016-07-21 04:24:14,273 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port /127.0.0.1:11224 2016-07-21 04:24:14,309 [myid:] - INFO [main:QuorumPeer@776] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 04:24:14,315 [myid:] - INFO [main:QuorumPeer@791] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 04:24:14,328 [myid:] - INFO [QuorumPeerListener:QuorumCnxManager$Listener@632] - My election bind port: /127.0.0.1:11223 2016-07-21 04:24:14,329 [myid:] - INFO [main:FLEZeroWeightTest$LEThread@101] - Constructor: Thread-0 2016-07-21 04:24:14,329 [myid:] - INFO [Thread-0:FLEZeroWeightTest$LEThread@112] - Going to call leader election. 2016-07-21 04:24:14,330 [myid:] - INFO [main:QuorumHierarchical@136] - 9, 9, 3 2016-07-21 04:24:14,331 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 1 selector thread(s), 8 worker threads, and 64 kB direct buffers. 
2016-07-21 04:24:14,331 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port /127.0.0.1:11227 2016-07-21 04:24:14,332 [myid:] - INFO [main:QuorumPeer@776] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 04:24:14,351 [myid:] - INFO [main:QuorumPeer@791] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 04:24:14,353 [myid:] - INFO [main:FLEZeroWeightTest$LEThread@101] - Constructor: Thread-2 2016-07-21 04:24:14,353 [myid:] - INFO [QuorumPeerListener:QuorumCnxManager$Listener@632] - My election bind port: /127.0.0.1:11226 2016-07-21 04:24:14,354 [myid:] - INFO [Thread-2:FLEZeroWeightTest$LEThread@112] - Going to call leader election. 2016-07-21 04:24:14,355 [myid:] - INFO [main:QuorumHierarchical@136] - 9, 9, 3 2016-07-21 04:24:14,355 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 1 selector thread(s), 8 worker threads, and 64 kB direct buffers. 2016-07-21 04:24:14,356 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port /127.0.0.1:11230 2016-07-21 04:24:14,357 [myid:] - INFO [main:QuorumPeer@776] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 04:24:14,359 [myid:] - INFO [main:QuorumPeer@791] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 04:24:14,368 [myid:] - INFO [main:FLEZeroWeightTest$LEThread@101] - Constructor: Thread-3 2016-07-21 04:24:14,368 [myid:] - INFO [QuorumPeerListener:QuorumCnxManager$Listener@632] - My election bind port: /127.0.0.1:11229 2016-07-21 04:24:14,369 [myid:] - INFO [Thread-3:FLEZeroWeightTest$LEThread@112] - Going to call leader election. 
2016-07-21 04:24:14,370 [myid:] - INFO [main:QuorumHierarchical@136] - 9, 9, 3 2016-07-21 04:24:14,370 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 1 selector thread(s), 8 worker threads, and 64 kB direct buffers. 2016-07-21 04:24:14,371 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port /127.0.0.1:11233 2016-07-21 04:24:14,372 [myid:] - INFO [main:QuorumPeer@776] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 04:24:14,373 [myid:] - INFO [main:QuorumPeer@791] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 04:24:14,376 [myid:] - INFO [main:FLEZeroWeightTest$LEThread@101] - Constructor: Thread-4 2016-07-21 04:24:14,376 [myid:] - INFO [QuorumPeerListener:QuorumCnxManager$Listener@632] - My election bind port: /127.0.0.1:11232 2016-07-21 04:24:14,376 [myid:] - INFO [Thread-4:FLEZeroWeightTest$LEThread@112] - Going to call leader election. 2016-07-21 04:24:14,377 [myid:] - INFO [main:QuorumHierarchical@136] - 9, 9, 3 2016-07-21 04:24:14,378 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 1 selector thread(s), 8 worker threads, and 64 kB direct buffers. 2016-07-21 04:24:14,378 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port /127.0.0.1:11236 2016-07-21 04:24:14,379 [myid:] - INFO [main:QuorumPeer@776] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 04:24:14,381 [myid:] - INFO [main:QuorumPeer@791] - acceptedEpoch not found! Creating with a reasonable default of 0. 
This should only happen when you are upgrading your installation 2016-07-21 04:24:14,383 [myid:] - INFO [main:FLEZeroWeightTest$LEThread@101] - Constructor: Thread-5 2016-07-21 04:24:14,383 [myid:] - INFO [QuorumPeerListener:QuorumCnxManager$Listener@632] - My election bind port: /127.0.0.1:11235 2016-07-21 04:24:14,383 [myid:] - INFO [Thread-5:FLEZeroWeightTest$LEThread@112] - Going to call leader election. 2016-07-21 04:24:14,384 [myid:] - INFO [main:QuorumHierarchical@136] - 9, 9, 3 2016-07-21 04:24:14,385 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 1 selector thread(s), 8 worker threads, and 64 kB direct buffers. 2016-07-21 04:24:14,385 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port /127.0.0.1:11239 2016-07-21 04:24:14,386 [myid:] - INFO [main:QuorumPeer@776] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 04:24:14,388 [myid:] - INFO [main:QuorumPeer@791] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 04:24:14,390 [myid:] - INFO [main:FLEZeroWeightTest$LEThread@101] - Constructor: Thread-6 2016-07-21 04:24:14,391 [myid:] - INFO [QuorumPeerListener:QuorumCnxManager$Listener@632] - My election bind port: /127.0.0.1:11238 2016-07-21 04:24:14,391 [myid:] - INFO [Thread-6:FLEZeroWeightTest$LEThread@112] - Going to call leader election. 2016-07-21 04:24:14,392 [myid:] - INFO [main:QuorumHierarchical@136] - 9, 9, 3 2016-07-21 04:24:14,392 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 1 selector thread(s), 8 worker threads, and 64 kB direct buffers. 
2016-07-21 04:24:14,392 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port /127.0.0.1:11242 2016-07-21 04:24:14,393 [myid:] - INFO [main:QuorumPeer@776] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 04:24:14,395 [myid:] - INFO [main:QuorumPeer@791] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 04:24:14,397 [myid:] - INFO [main:FLEZeroWeightTest$LEThread@101] - Constructor: Thread-7 2016-07-21 04:24:14,398 [myid:] - INFO [QuorumPeerListener:QuorumCnxManager$Listener@632] - My election bind port: /127.0.0.1:11241 2016-07-21 04:24:14,398 [myid:] - INFO [Thread-7:FLEZeroWeightTest$LEThread@112] - Going to call leader election. 2016-07-21 04:24:14,399 [myid:] - INFO [main:QuorumHierarchical@136] - 9, 9, 3 2016-07-21 04:24:14,399 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 1 selector thread(s), 8 worker threads, and 64 kB direct buffers. 2016-07-21 04:24:14,400 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port /127.0.0.1:11245 2016-07-21 04:24:14,400 [myid:] - INFO [main:QuorumPeer@776] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 04:24:14,402 [myid:] - INFO [main:QuorumPeer@791] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation 2016-07-21 04:24:14,404 [myid:] - INFO [QuorumPeerListener:QuorumCnxManager$Listener@632] - My election bind port: /127.0.0.1:11244 2016-07-21 04:24:14,405 [myid:] - INFO [main:FLEZeroWeightTest$LEThread@101] - Constructor: Thread-8 2016-07-21 04:24:14,405 [myid:] - INFO [Thread-8:FLEZeroWeightTest$LEThread@112] - Going to call leader election. 
2016-07-21 04:24:14,406 [myid:] - INFO [main:QuorumHierarchical@136] - 9, 9, 3
2016-07-21 04:24:14,407 [myid:] - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 1 selector thread(s), 8 worker threads, and 64 kB direct buffers.
2016-07-21 04:24:14,407 [myid:] - INFO [main:NIOServerCnxnFactory@686] - binding to port /127.0.0.1:11248
2016-07-21 04:24:14,408 [myid:] - INFO [main:QuorumPeer@776] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2016-07-21 04:24:14,410 [myid:] - INFO [main:QuorumPeer@791] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2016-07-21 04:24:14,412 [myid:] - INFO [QuorumPeerListener:QuorumCnxManager$Listener@632] - My election bind port: /127.0.0.1:11247
2016-07-21 04:24:14,412 [myid:] - INFO [main:FLEZeroWeightTest$LEThread@101] - Constructor: Thread-9
2016-07-21 04:24:14,412 [myid:] - INFO [main:FLEZeroWeightTest@162] - Started threads testZeroWeightQuorum
2016-07-21 04:24:14,412 [myid:] - INFO [Thread-9:FLEZeroWeightTest$LEThread@112] - Going to call leader election.
2016-07-21 04:24:14,431 [myid:] - INFO [Thread-0:FastLeaderElection@894] - New election. My id = 0, proposed zxid=0x0
2016-07-21 04:24:14,431 [myid:] - WARN [Thread-2:MBeanRegistry@112] - Failed to register MBean LeaderElection
2016-07-21 04:24:14,431 [myid:] - WARN [Thread-7:MBeanRegistry@112] - Failed to register MBean LeaderElection
2016-07-21 04:24:14,432 [myid:] - WARN [Thread-3:MBeanRegistry@112] - Failed to register MBean LeaderElection
2016-07-21 04:24:14,432 [myid:] - WARN [Thread-2:FastLeaderElection@876] - Failed to register with JMX javax.management.InstanceAlreadyExistsException: org.apache.ZooKeeperService:name0=LeaderElection at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324) at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) at org.apache.zookeeper.jmx.MBeanRegistry.register(MBeanRegistry.java:108) at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:873) at org.apache.zookeeper.test.FLEZeroWeightTest$LEThread.run(FLEZeroWeightTest.java:113)
2016-07-21 04:24:14,434 [myid:] - WARN [Thread-9:MBeanRegistry@112] - Failed to register MBean LeaderElection
2016-07-21 04:24:14,436 [myid:] - WARN [Thread-9:FastLeaderElection@876] - Failed to register with JMX javax.management.InstanceAlreadyExistsException: org.apache.ZooKeeperService:name0=LeaderElection at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324) at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) at org.apache.zookeeper.jmx.MBeanRegistry.register(MBeanRegistry.java:108) at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:873) at org.apache.zookeeper.test.FLEZeroWeightTest$LEThread.run(FLEZeroWeightTest.java:113)
2016-07-21 04:24:14,434 [myid:] - WARN [Thread-8:MBeanRegistry@112] - Failed to register MBean LeaderElection
2016-07-21 04:24:14,437 [myid:] - WARN [Thread-8:FastLeaderElection@876] - Failed to register with JMX javax.management.InstanceAlreadyExistsException: org.apache.ZooKeeperService:name0=LeaderElection at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324) at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) at org.apache.zookeeper.jmx.MBeanRegistry.register(MBeanRegistry.java:108) at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:873) at org.apache.zookeeper.test.FLEZeroWeightTest$LEThread.run(FLEZeroWeightTest.java:113)
2016-07-21 04:24:14,433 [myid:] - WARN [Thread-6:MBeanRegistry@112] - Failed to register MBean LeaderElection
2016-07-21 04:24:14,437 [myid:] - WARN [Thread-6:FastLeaderElection@876] - Failed to register with JMX javax.management.InstanceAlreadyExistsException: org.apache.ZooKeeperService:name0=LeaderElection at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324) at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) at org.apache.zookeeper.jmx.MBeanRegistry.register(MBeanRegistry.java:108) at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:873) at org.apache.zookeeper.test.FLEZeroWeightTest$LEThread.run(FLEZeroWeightTest.java:113)
2016-07-21 04:24:14,433 [myid:] - WARN [Thread-7:FastLeaderElection@876] - Failed to register with JMX javax.management.InstanceAlreadyExistsException: org.apache.ZooKeeperService:name0=LeaderElection at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324) at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) at org.apache.zookeeper.jmx.MBeanRegistry.register(MBeanRegistry.java:108) at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:873) at org.apache.zookeeper.test.FLEZeroWeightTest$LEThread.run(FLEZeroWeightTest.java:113)
2016-07-21 04:24:14,433 [myid:] - WARN [Thread-3:FastLeaderElection@876] - Failed to register with JMX javax.management.InstanceAlreadyExistsException: org.apache.ZooKeeperService:name0=LeaderElection at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324) at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) at org.apache.zookeeper.jmx.MBeanRegistry.register(MBeanRegistry.java:108) at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:873) at org.apache.zookeeper.test.FLEZeroWeightTest$LEThread.run(FLEZeroWeightTest.java:113)
2016-07-21 04:24:14,433 [myid:] - WARN [Thread-5:MBeanRegistry@112] - Failed to register MBean LeaderElection
2016-07-21 04:24:14,439 [myid:] - WARN [Thread-5:FastLeaderElection@876] - Failed to register with JMX javax.management.InstanceAlreadyExistsException: org.apache.ZooKeeperService:name0=LeaderElection at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324) at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) at org.apache.zookeeper.jmx.MBeanRegistry.register(MBeanRegistry.java:108) at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:873) at org.apache.zookeeper.test.FLEZeroWeightTest$LEThread.run(FLEZeroWeightTest.java:113)
2016-07-21 04:24:14,432 [myid:] - WARN [Thread-4:MBeanRegistry@112] - Failed to register MBean LeaderElection
2016-07-21 04:24:14,439 [myid:] - WARN [Thread-4:FastLeaderElection@876] - Failed to register with JMX javax.management.InstanceAlreadyExistsException: org.apache.ZooKeeperService:name0=LeaderElection at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324) at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) at org.apache.zookeeper.jmx.MBeanRegistry.register(MBeanRegistry.java:108) at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:873) at org.apache.zookeeper.test.FLEZeroWeightTest$LEThread.run(FLEZeroWeightTest.java:113)
2016-07-21 04:24:14,439 [myid:] - INFO [Thread-5:FastLeaderElection@894] - New election. My id = 4, proposed zxid=0x0
2016-07-21 04:24:14,439 [myid:] - INFO [Thread-3:FastLeaderElection@894] - New election. My id = 2, proposed zxid=0x0
2016-07-21 04:24:14,438 [myid:] - INFO [Thread-7:FastLeaderElection@894] - New election. My id = 6, proposed zxid=0x0
2016-07-21 04:24:14,438 [myid:] - INFO [WorkerReceiver[myid=0]:QuorumHierarchical@136] - 9, 9, 3
2016-07-21 04:24:14,438 [myid:] - INFO [Thread-6:FastLeaderElection@894] - New election. My id = 5, proposed zxid=0x0
2016-07-21 04:24:14,440 [myid:] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@688] - Notification: 2 (message format version), 0 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-07-21 04:24:14,437 [myid:] - INFO [Thread-8:FastLeaderElection@894] - New election. My id = 7, proposed zxid=0x0
2016-07-21 04:24:14,437 [myid:] - INFO [Thread-9:FastLeaderElection@894] - New election. My id = 8, proposed zxid=0x0
2016-07-21 04:24:14,436 [myid:] - INFO [Thread-2:FastLeaderElection@894] - New election. My id = 1, proposed zxid=0x0
2016-07-21 04:24:14,439 [myid:] - INFO [Thread-4:FastLeaderElection@894] - New election. My id = 3, proposed zxid=0x0
2016-07-21 04:24:14,446 [myid:] - INFO [/127.0.0.1:11226:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:51059
2016-07-21 04:24:14,446 [myid:] - INFO [/127.0.0.1:11223:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:42458
2016-07-21 04:24:14,447 [myid:] - INFO [WorkerSender[myid=0]:QuorumCnxManager@276] - Have smaller server identifier, so dropping the connection: (1, 0)
2016-07-21 04:24:14,449 [myid:] - INFO [/127.0.0.1:11229:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:60297
2016-07-21 04:24:14,449 [myid:] - INFO [WorkerSender[myid=0]:QuorumCnxManager@276] - Have smaller server identifier, so dropping the connection: (2, 0)
2016-07-21 04:24:14,450 [myid:] - INFO [/127.0.0.1:11232:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:45441
2016-07-21 04:24:14,450 [myid:] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@832] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:982) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:63) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:820)
2016-07-21 04:24:14,452 [myid:] - INFO [/127.0.0.1:11226:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:51068
2016-07-21 04:24:14,451 [myid:] - INFO [WorkerSender[myid=0]:QuorumCnxManager@276] - Have smaller server identifier, so dropping the connection: (3, 0)
2016-07-21 04:24:14,452 [myid:] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 0 my id = 1
2016-07-21 04:24:14,453 [myid:] - INFO [WorkerReceiver[myid=1]:QuorumHierarchical@136] - 9, 9, 3
2016-07-21 04:24:14,454 [myid:] - INFO [WorkerSender[myid=1]:QuorumCnxManager@276] - Have smaller server identifier, so dropping the connection: (2, 1)
2016-07-21 04:24:14,455 [myid:] - INFO [WorkerSender[myid=1]:QuorumCnxManager@276] - Have smaller server identifier, so dropping the connection: (3, 1)
2016-07-21 04:24:14,456 [myid:] - INFO [/127.0.0.1:11235:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:59529
2016-07-21 04:24:14,456 [myid:] - INFO [WorkerSender[myid=0]:QuorumCnxManager@276] - Have smaller server identifier, so dropping the connection: (4, 0)
2016-07-21 04:24:14,457 [myid:] - INFO [/127.0.0.1:11238:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:43251
2016-07-21 04:24:14,457 [myid:] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@915] - Connection broken for id 0, my id = 4, error = java.net.SocketException: Socket closed at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.read(SocketInputStream.java:152) at java.net.SocketInputStream.read(SocketInputStream.java:122) at java.net.SocketInputStream.read(SocketInputStream.java:210) at java.io.DataInputStream.readInt(DataInputStream.java:387) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900)
2016-07-21 04:24:14,457 [myid:] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker
2016-07-21 04:24:14,457 [myid:] - INFO [WorkerSender[myid=0]:QuorumCnxManager@276] - Have smaller server identifier, so dropping the connection: (5, 0)
2016-07-21 04:24:14,458 [myid:] - INFO [/127.0.0.1:11241:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:44371
2016-07-21 04:24:14,458 [myid:] - INFO [WorkerSender[myid=0]:QuorumCnxManager@276] - Have smaller server identifier, so dropping the connection: (6, 0)
2016-07-21 04:24:14,459 [myid:] - INFO [/127.0.0.1:11244:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:40443
2016-07-21 04:24:14,459 [myid:] - INFO [WorkerSender[myid=0]:QuorumCnxManager@276] - Have smaller server identifier, so dropping the connection: (7, 0)
2016-07-21 04:24:14,460 [myid:] - INFO [/127.0.0.1:11247:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:57714
2016-07-21 04:24:14,460 [myid:] - INFO [WorkerSender[myid=0]:QuorumCnxManager@276] - Have smaller server identifier, so dropping the connection: (8, 0)
2016-07-21 04:24:14,460 [myid:] - INFO [WorkerSender[myid=1]:QuorumCnxManager@276] - Have smaller server identifier, so dropping the connection: (5, 1)
2016-07-21 04:24:14,461 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@688] - Notification: 2 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-07-21 04:24:14,461 [myid:] - INFO [WorkerSender[myid=1]:QuorumCnxManager@276] - Have smaller server identifier, so dropping the connection: (6, 1)
2016-07-21 04:24:14,461 [myid:] - INFO [WorkerSender[myid=1]:QuorumCnxManager@276] - Have smaller server identifier, so dropping the connection: (7, 1)
2016-07-21 04:24:14,462 [myid:] - INFO [WorkerSender[myid=1]:QuorumCnxManager@276] - Have smaller server identifier, so dropping the connection: (8, 1)
2016-07-21 04:24:14,466 [myid:] - WARN [RecvWorker:0:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker
2016-07-21 04:24:14,469 [myid:] - INFO [/127.0.0.1:11226:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:51082
2016-07-21 04:24:14,475 [myid:] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@832] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:982) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:63) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:820)
2016-07-21 04:24:14,476 [myid:] - WARN [SendWorker:0:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 0 my id = 4
2016-07-21 04:24:14,479 [myid:] - INFO [WorkerReceiver[myid=1]:QuorumHierarchical@136] - 9, 9, 3
2016-07-21 04:24:14,479 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@688] - Notification: 2 (message format version), 4 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 4 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-07-21 04:24:14,482 [myid:] - INFO [WorkerReceiver[myid=4]:QuorumHierarchical@136] - 9, 9, 3
2016-07-21 04:24:14,482 [myid:] - INFO [WorkerReceiver[myid=4]:FastLeaderElection@688] - Notification: 2 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-07-21 04:24:14,484 [myid:] - INFO [/127.0.0.1:11238:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:43255
2016-07-21 04:24:14,485 [myid:] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@915] - Connection broken for id 1, my id = 5, error = java.net.SocketException: Socket closed at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.read(SocketInputStream.java:152) at java.net.SocketInputStream.read(SocketInputStream.java:122) at java.net.SocketInputStream.read(SocketInputStream.java:210) at java.io.DataInputStream.readInt(DataInputStream.java:387) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900)
2016-07-21 04:24:14,485 [myid:] - INFO [/127.0.0.1:11232:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:45444
2016-07-21 04:24:14,485 [myid:] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker
2016-07-21 04:24:14,486 [myid:] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@915] - Connection broken for id 1, my id = 3, error = java.net.SocketException: Socket closed at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.read(SocketInputStream.java:152) at java.net.SocketInputStream.read(SocketInputStream.java:122) at java.net.SocketInputStream.read(SocketInputStream.java:210) at java.io.DataInputStream.readInt(DataInputStream.java:387) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900)
2016-07-21 04:24:14,486 [myid:] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker
2016-07-21 04:24:14,487 [myid:] - INFO [WorkerSender[myid=5]:QuorumCnxManager@276] - Have smaller server identifier, so dropping the connection: (6, 5)
2016-07-21 04:24:14,487 [myid:] - INFO [WorkerSender[myid=5]:QuorumCnxManager@276] - Have smaller server identifier, so dropping the connection: (7, 5)
2016-07-21 04:24:14,489 [myid:] - INFO [WorkerReceiver[myid=5]:QuorumHierarchical@136] - 9, 9, 3
2016-07-21 04:24:14,490 [myid:] - INFO [WorkerSender[myid=5]:QuorumCnxManager@276] - Have smaller server identifier, so dropping the connection: (8, 5)
2016-07-21 04:24:14,490 [myid:] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@832] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:982) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:63) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:820)
2016-07-21 04:24:14,490 [myid:] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 1 my id = 3
2016-07-21 04:24:14,490 [myid:] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@832] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:982) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:63) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:820)
2016-07-21 04:24:14,490 [myid:] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 1 my id = 5
2016-07-21 04:24:14,491 [myid:] - INFO [/127.0.0.1:11241:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:44375
2016-07-21 04:24:14,491 [myid:] - INFO [WorkerReceiver[myid=5]:FastLeaderElection@688] - Notification: 2 (message format version), 5 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 5 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-07-21 04:24:14,491 [myid:] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@915] - Connection broken for id 1, my id = 6, error = java.net.SocketException: Socket closed at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.read(SocketInputStream.java:152) at java.net.SocketInputStream.read(SocketInputStream.java:122) at java.net.SocketInputStream.read(SocketInputStream.java:210) at java.io.DataInputStream.readInt(DataInputStream.java:387) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900)
2016-07-21 04:24:14,492 [myid:] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker
2016-07-21 04:24:14,492 [myid:] - INFO [/127.0.0.1:11244:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:40447
2016-07-21 04:24:14,493 [myid:] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@915] - Connection broken for id 1, my id = 7, error = java.net.SocketException: Socket closed at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.read(SocketInputStream.java:152) at java.net.SocketInputStream.read(SocketInputStream.java:122) at java.net.SocketInputStream.read(SocketInputStream.java:210) at java.io.DataInputStream.readInt(DataInputStream.java:387) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900)
2016-07-21 04:24:14,493 [myid:] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker
2016-07-21 04:24:14,495 [myid:] - INFO [/127.0.0.1:11235:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:59547
2016-07-21 04:24:14,497 [myid:] - INFO [/127.0.0.1:11229:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:60301
2016-07-21 04:24:14,498 [myid:] - INFO [/127.0.0.1:11247:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:57718
2016-07-21 04:24:14,498 [myid:] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@915] - Connection broken for id 1, my id = 2, error = java.net.SocketException: Socket closed at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.read(SocketInputStream.java:152) at java.net.SocketInputStream.read(SocketInputStream.java:122) at java.net.SocketInputStream.read(SocketInputStream.java:210) at java.io.DataInputStream.readInt(DataInputStream.java:387) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900)
2016-07-21 04:24:14,499 [myid:] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker
2016-07-21 04:24:14,500 [myid:] - INFO [/127.0.0.1:11247:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:57732
2016-07-21 04:24:14,500 [myid:] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@832] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:982) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:63) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:820)
2016-07-21 04:24:14,500 [myid:] - INFO [WorkerSender[myid=4]:QuorumCnxManager@276] - Have smaller server identifier, so dropping the connection: (5, 4)
2016-07-21 04:24:14,501 [myid:] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 1 my id = 7
2016-07-21 04:24:14,501 [myid:] - INFO [WorkerSender[myid=4]:QuorumCnxManager@276] - Have smaller server identifier, so dropping the connection: (6, 4)
2016-07-21 04:24:14,502 [myid:] - INFO [WorkerSender[myid=4]:QuorumCnxManager@276] - Have smaller server identifier, so dropping the connection: (7, 4)
2016-07-21 04:24:14,502 [myid:] - INFO [WorkerSender[myid=4]:QuorumCnxManager@276] - Have smaller server identifier, so dropping the connection: (8, 4)
2016-07-21 04:24:14,504 [myid:] - INFO [WorkerReceiver[myid=4]:QuorumHierarchical@136] - 9, 9, 3
2016-07-21 04:24:14,504 [myid:] - INFO [WorkerReceiver[myid=4]:FastLeaderElection@688] - Notification: 2 (message format version), 4 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 4 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-07-21 04:24:14,505 [myid:] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@832] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:982) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:63) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:820)
2016-07-21 04:24:14,505 [myid:] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 1 my id = 6
2016-07-21 04:24:14,505 [myid:] - INFO [/127.0.0.1:11238:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:43280
2016-07-21 04:24:14,506 [myid:] - INFO [/127.0.0.1:11223:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:42457
2016-07-21 04:24:14,506 [myid:] - WARN [RecvWorker:4:QuorumCnxManager$RecvWorker@915] - Connection broken for id 4, my id = 5, error = java.net.SocketException: Socket closed at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.read(SocketInputStream.java:152) at java.net.SocketInputStream.read(SocketInputStream.java:122) at java.net.SocketInputStream.read(SocketInputStream.java:210) at java.io.DataInputStream.readInt(DataInputStream.java:387) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900)
2016-07-21 04:24:14,506 [myid:] - WARN [RecvWorker:4:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker
2016-07-21 04:24:14,507 [myid:] - INFO [/127.0.0.1:11226:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:51083
2016-07-21 04:24:14,507 [myid:] - WARN [SendWorker:3:QuorumCnxManager$SendWorker@837] - Exception when using channel: for id 3 my id = 1 error = java.net.SocketException: Broken pipe
2016-07-21 04:24:14,507 [myid:] - WARN [RecvWorker:3:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker
2016-07-21 04:24:14,508 [myid:] - INFO [/127.0.0.1:11226:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:51084
2016-07-21 04:24:14,508 [myid:] - INFO [WorkerReceiver[myid=0]:QuorumHierarchical@136] - 9, 9, 3
2016-07-21 04:24:14,508 [myid:] - WARN [SendWorker:6:QuorumCnxManager$SendWorker@837] - Exception when using channel: for id 6 my id = 1 error = java.net.SocketException: Broken pipe
2016-07-21 04:24:14,508 [myid:] - INFO [WorkerReceiver[myid=0]:FastLeaderElection@688] - Notification: 2 (message format version), 5 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 5 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-07-21 04:24:14,508 [myid:] - INFO [WorkerReceiver[myid=5]:QuorumHierarchical@136] - 9, 9, 3
2016-07-21 04:24:14,509 [myid:] - INFO [WorkerReceiver[myid=5]:FastLeaderElection@688] - Notification: 2 (message format version), 0 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-07-21 04:24:14,510 [myid:] - INFO [WorkerReceiver[myid=1]:QuorumHierarchical@136] - 9, 9, 3
2016-07-21 04:24:14,510 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@688] - Notification: 2 (message format version), 6 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 6 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-07-21 04:24:14,510 [myid:] - WARN [RecvWorker:6:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker
2016-07-21 04:24:14,516 [myid:] - INFO [WorkerSender[myid=2]:QuorumCnxManager@276] - Have smaller server identifier, so dropping the connection: (3, 2)
2016-07-21 04:24:14,516 [myid:] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@832] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:982) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:63) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:820)
2016-07-21 04:24:14,517 [myid:] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 1 my id = 2
2016-07-21 04:24:14,517 [myid:] - INFO [WorkerReceiver[myid=2]:QuorumHierarchical@136] - 9, 9, 3
2016-07-21 04:24:14,517 [myid:] - WARN [SendWorker:5:QuorumCnxManager$SendWorker@837] - Exception when using channel: for id 5 my id = 4 error = java.net.SocketException: Broken pipe
2016-07-21 04:24:14,518 [myid:] - INFO [/127.0.0.1:11235:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:59564
2016-07-21 04:24:14,518 [myid:] - INFO [WorkerSender[myid=2]:QuorumCnxManager@276] - Have smaller server identifier, so dropping the connection: (4, 2)
2016-07-21 04:24:14,518 [myid:] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-07-21 04:24:14,518 [myid:] - WARN [RecvWorker:5:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker
2016-07-21 04:24:14,520 [myid:] - INFO [/127.0.0.1:11232:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:45462
2016-07-21 04:24:14,520 [myid:] - WARN [SendWorker:6:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 6 my id = 1
2016-07-21 04:24:14,520 [myid:] - INFO [/127.0.0.1:11232:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:45470
2016-07-21 04:24:14,521 [myid:] - WARN [SendWorker:3:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 3 my id = 1
2016-07-21 04:24:14,521 [myid:] - INFO [WorkerReceiver[myid=3]:QuorumHierarchical@136] - 9, 9, 3
2016-07-21 04:24:14,521 [myid:] - INFO [/127.0.0.1:11226:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:51085
2016-07-21 04:24:14,522 [myid:] - INFO [WorkerSender[myid=3]:QuorumCnxManager@276] - Have smaller server identifier, so dropping the connection: (4, 3)
2016-07-21 04:24:14,522 [myid:] - INFO [WorkerReceiver[myid=3]:FastLeaderElection@688] - Notification: 2 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-07-21 04:24:14,522 [myid:] - WARN [RecvWorker:5:QuorumCnxManager$RecvWorker@915] - Connection broken for id 5, my id = 1, error = java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900)
2016-07-21 04:24:14,522 [myid:] - WARN [SendWorker:5:QuorumCnxManager$SendWorker@837] - Exception when using channel: for id 5 my id = 1 error = java.net.SocketException: Broken pipe
2016-07-21 04:24:14,523 [myid:] - WARN [SendWorker:5:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 5 my id = 1
2016-07-21 04:24:14,522 [myid:] - WARN [SendWorker:4:QuorumCnxManager$SendWorker@832] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:982) at org.apache.zoo ...[truncated 231211 chars]... .lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:982) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:63) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:820)
2016-07-21 04:24:40,861 [myid:] - WARN [SendWorker:3:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 3 my id = 6
2016-07-21 04:24:40,805 [myid:] - WARN [SendWorker:4:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 4 my id = 3
2016-07-21 04:24:40,805 [myid:] - INFO [WorkerReceiver[myid=8]:QuorumHierarchical@136] - 9, 9, 3
2016-07-21 04:24:40,805 [myid:] - WARN [SendWorker:3:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 3 my id = 7
2016-07-21 04:24:40,804 [myid:] - INFO [WorkerReceiver[myid=6]:FastLeaderElection@688] - Notification: 2 (message format version), 5 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 5 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-07-21 04:24:40,804 [myid:] - INFO [WorkerReceiver[myid=4]:FastLeaderElection@688] - Notification: 2 (message format version), 5 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 5 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-07-21 04:24:40,862 [myid:] - INFO [WorkerReceiver[myid=8]:FastLeaderElection@688] - Notification: 2 (message format version), 5 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 5 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
2016-07-21 04:24:40,861 [myid:] - INFO [WorkerReceiver[myid=5]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0xffffffffffffffff (n.round), LEADING (n.state), 2 (n.sid), 0x0 (n.peerEPoch), FOLLOWING (my state)0 (n.config version)
2016-07-21 04:24:40,860 [myid:] - INFO [/127.0.0.1:11235:QuorumCnxManager$Listener@661] - Leaving listener
2016-07-21 04:24:40,859 [myid:] - INFO [WorkerReceiver[myid=7]:FastLeaderElection@688] - Notification: 2 (message format version), 4 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 4 (n.sid), 0x0 (n.peerEPoch), FOLLOWING (my state)0 (n.config version)
2016-07-21 04:24:40,863 [myid:] - INFO [main:QuorumBase@398] - Shutting down leader election QuorumPeer
2016-07-21 04:24:40,863 [myid:] - INFO [main:QuorumBase@403] - Waiting for QuorumPeer to exit thread
2016-07-21 04:24:40,863 [myid:] - WARN [SendWorker:7:QuorumCnxManager$SendWorker@832] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:982) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:63) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:820)
2016-07-21 04:24:40,864 [myid:] - WARN [RecvWorker:4:QuorumCnxManager$RecvWorker@915] - Connection broken for id 4, my id = 7, error = java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900)
2016-07-21 04:24:40,864 [myid:] - WARN
[RecvWorker:5:QuorumCnxManager$RecvWorker@915] - Connection broken for id 5, my id = 4, error = java.net.SocketException: Socket closed at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.read(SocketInputStream.java:152) at java.net.SocketInputStream.read(SocketInputStream.java:122) at java.net.SocketInputStream.read(SocketInputStream.java:210) at java.io.DataInputStream.readInt(DataInputStream.java:387) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900) 2016-07-21 04:24:40,864 [myid:] - WARN [RecvWorker:5:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker 2016-07-21 04:24:40,864 [myid:] - WARN [SendWorker:4:QuorumCnxManager$SendWorker@837] - Exception when using channel: for id 4 my id = 7 error = java.net.SocketException: Broken pipe 2016-07-21 04:24:40,865 [myid:] - INFO [WorkerReceiver[myid=8]:QuorumHierarchical@136] - 9, 9, 3 2016-07-21 04:24:40,862 [myid:] - INFO [WorkerReceiver[myid=4]:QuorumHierarchical@136] - 9, 9, 3 2016-07-21 04:24:40,862 [myid:] - INFO [WorkerReceiver[myid=6]:QuorumHierarchical@136] - 9, 9, 3 2016-07-21 04:24:40,865 [myid:] - INFO [WorkerReceiver[myid=8]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0xffffffffffffffff (n.round), LEADING (n.state), 2 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-07-21 04:24:40,865 [myid:] - INFO [main:QuorumBase@394] - Shutting down quorum peer QuorumPeer 2016-07-21 04:24:40,865 [myid:] - INFO [WorkerReceiver[myid=4]:FastLeaderElection@688] - Notification: 2 (message format version), 5 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 5 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-07-21 04:24:40,865 [myid:] - WARN [SendWorker:4:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 4 my id = 7 2016-07-21 04:24:40,864 [myid:] - WARN [RecvWorker:7:QuorumCnxManager$RecvWorker@915] - Connection 
broken for id 7, my id = 4, error = java.net.SocketException: Socket closed at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.read(SocketInputStream.java:152) at java.net.SocketInputStream.read(SocketInputStream.java:122) at java.net.SocketInputStream.read(SocketInputStream.java:210) at java.io.DataInputStream.readInt(DataInputStream.java:387) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900) 2016-07-21 04:24:40,866 [myid:] - WARN [RecvWorker:7:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker 2016-07-21 04:24:40,864 [myid:] - WARN [RecvWorker:4:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker 2016-07-21 04:24:40,864 [myid:] - INFO [WorkerReceiver[myid=7]:QuorumHierarchical@136] - 9, 9, 3 2016-07-21 04:24:40,864 [myid:] - WARN [RecvWorker:4:QuorumCnxManager$RecvWorker@915] - Connection broken for id 4, my id = 5, error = java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900) 2016-07-21 04:24:40,867 [myid:] - WARN [RecvWorker:4:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker 2016-07-21 04:24:40,864 [myid:] - WARN [SendWorker:5:QuorumCnxManager$SendWorker@832] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:982) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:63) at 
org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:820) 2016-07-21 04:24:40,864 [myid:] - WARN [SendWorker:7:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 7 my id = 4 2016-07-21 04:24:40,864 [myid:] - WARN [RecvWorker:8:QuorumCnxManager$RecvWorker@915] - Connection broken for id 8, my id = 4, error = java.net.SocketException: Socket closed at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.read(SocketInputStream.java:152) at java.net.SocketInputStream.read(SocketInputStream.java:122) at java.net.SocketInputStream.read(SocketInputStream.java:210) at java.io.DataInputStream.readInt(DataInputStream.java:387) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900) 2016-07-21 04:24:40,868 [myid:] - WARN [RecvWorker:8:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker 2016-07-21 04:24:40,863 [myid:] - WARN [RecvWorker:4:QuorumCnxManager$RecvWorker@915] - Connection broken for id 4, my id = 6, error = java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900) 2016-07-21 04:24:40,869 [myid:] - WARN [RecvWorker:4:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker 2016-07-21 04:24:40,863 [myid:] - WARN [RecvWorker:4:QuorumCnxManager$RecvWorker@915] - Connection broken for id 4, my id = 8, error = java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900) 2016-07-21 04:24:40,870 [myid:] - WARN [RecvWorker:4:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker 2016-07-21 04:24:40,863 [myid:] - WARN [SendWorker:8:QuorumCnxManager$SendWorker@832] - Interrupted while waiting for message on queue java.lang.InterruptedException at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:982) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:63) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:820) 2016-07-21 04:24:40,863 [myid:] - INFO [WorkerReceiver[myid=5]:QuorumHierarchical@136] - 9, 9, 3 2016-07-21 04:24:40,863 [myid:] - WARN [SendWorker:6:QuorumCnxManager$SendWorker@832] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:982) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:63) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:820) 2016-07-21 04:24:40,863 [myid:] - WARN [RecvWorker:6:QuorumCnxManager$RecvWorker@915] - Connection broken for id 6, my id = 4, error = java.net.SocketException: Socket closed at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.read(SocketInputStream.java:152) at java.net.SocketInputStream.read(SocketInputStream.java:122) at java.net.SocketInputStream.read(SocketInputStream.java:210) at java.io.DataInputStream.readInt(DataInputStream.java:387) at 
org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900) 2016-07-21 04:24:40,871 [myid:] - WARN [RecvWorker:6:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker 2016-07-21 04:24:40,871 [myid:] - WARN [SendWorker:6:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 6 my id = 4 2016-07-21 04:24:40,871 [myid:] - INFO [WorkerReceiver[myid=5]:FastLeaderElection@688] - Notification: 2 (message format version), 4 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 4 (n.sid), 0x0 (n.peerEPoch), FOLLOWING (my state)0 (n.config version) 2016-07-21 04:24:40,870 [myid:] - WARN [SendWorker:8:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 8 my id = 4 2016-07-21 04:24:40,870 [myid:] - WARN [SendWorker:4:QuorumCnxManager$SendWorker@832] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:982) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:63) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:820) 2016-07-21 04:24:40,869 [myid:] - WARN [SendWorker:4:QuorumCnxManager$SendWorker@832] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at 
java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:982) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:63) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:820) 2016-07-21 04:24:40,868 [myid:] - WARN [SendWorker:5:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 5 my id = 4 2016-07-21 04:24:40,868 [myid:] - WARN [SendWorker:4:QuorumCnxManager$SendWorker@832] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:982) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:63) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:820) 2016-07-21 04:24:40,867 [myid:] - INFO [WorkerReceiver[myid=7]:FastLeaderElection@688] - Notification: 2 (message format version), 8 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 8 (n.sid), 0x0 (n.peerEPoch), FOLLOWING (my state)0 (n.config version) 2016-07-21 04:24:40,866 [myid:] - INFO [/127.0.0.1:11238:QuorumCnxManager$Listener@661] - Leaving listener 2016-07-21 04:24:40,866 [myid:] - INFO [WorkerReceiver[myid=8]:QuorumHierarchical@136] - 9, 9, 3 2016-07-21 04:24:40,866 [myid:] - INFO [WorkerReceiver[myid=4]:FastLeaderElection$Messenger$WorkerReceiver@440] - WorkerReceiver is down 2016-07-21 04:24:40,874 [myid:] - INFO [WorkerReceiver[myid=7]:QuorumHierarchical@136] - 9, 9, 3 2016-07-21 
04:24:40,874 [myid:] - INFO [WorkerReceiver[myid=7]:FastLeaderElection@688] - Notification: 2 (message format version), 8 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 8 (n.sid), 0x0 (n.peerEPoch), FOLLOWING (my state)0 (n.config version) 2016-07-21 04:24:40,866 [myid:] - INFO [WorkerReceiver[myid=6]:FastLeaderElection@688] - Notification: 2 (message format version), 6 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 6 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-07-21 04:24:40,866 [myid:] - INFO [Thread-5:FLEZeroWeightTest$LEThread@115] - Thread 4 got a null vote 2016-07-21 04:24:40,875 [myid:] - INFO [WorkerReceiver[myid=7]:QuorumHierarchical@136] - 9, 9, 3 2016-07-21 04:24:40,875 [myid:] - INFO [WorkerReceiver[myid=7]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0xffffffffffffffff (n.round), FOLLOWING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), FOLLOWING (my state)0 (n.config version) 2016-07-21 04:24:40,873 [myid:] - WARN [SendWorker:4:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 4 my id = 5 2016-07-21 04:24:40,875 [myid:] - INFO [WorkerReceiver[myid=7]:QuorumHierarchical@136] - 9, 9, 3 2016-07-21 04:24:40,875 [myid:] - INFO [WorkerReceiver[myid=7]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0xffffffffffffffff (n.round), FOLLOWING (n.state), 3 (n.sid), 0x0 (n.peerEPoch), FOLLOWING (my state)0 (n.config version) 2016-07-21 04:24:40,872 [myid:] - WARN [SendWorker:4:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 4 my id = 6 2016-07-21 04:24:40,872 [myid:] - INFO [WorkerReceiver[myid=5]:QuorumHierarchical@136] - 9, 9, 3 2016-07-21 04:24:40,876 [myid:] - INFO [WorkerReceiver[myid=7]:QuorumHierarchical@136] - 9, 9, 3 2016-07-21 04:24:40,876 [myid:] - INFO [WorkerReceiver[myid=7]:FastLeaderElection@688] - Notification: 2 (message format version), 8 (n.leader), 0x0 (n.zxid), 0x1 
(n.round), LOOKING (n.state), 8 (n.sid), 0x0 (n.peerEPoch), FOLLOWING (my state)0 (n.config version) 2016-07-21 04:24:40,876 [myid:] - WARN [WorkerSender[myid=5]:QuorumCnxManager@455] - Cannot open channel to 4 at election address /127.0.0.1:11235 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465) at java.lang.Thread.run(Thread.java:745) 2016-07-21 04:24:40,877 [myid:] - WARN [RecvWorker:7:QuorumCnxManager$RecvWorker@915] - Connection broken for id 7, my id = 5, error = java.net.SocketException: Socket closed at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.read(SocketInputStream.java:152) at java.net.SocketInputStream.read(SocketInputStream.java:122) at java.net.SocketInputStream.read(SocketInputStream.java:210) at java.io.DataInputStream.readInt(DataInputStream.java:387) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900) 2016-07-21 04:24:40,877 [myid:] - WARN [RecvWorker:7:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker 2016-07-21 04:24:40,872 [myid:] - WARN 
[SendWorker:4:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 4 my id = 8 2016-07-21 04:24:40,877 [myid:] - INFO [WorkerReceiver[myid=5]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0xffffffffffffffff (n.round), FOLLOWING (n.state), 3 (n.sid), 0x0 (n.peerEPoch), FOLLOWING (my state)0 (n.config version) 2016-07-21 04:24:40,877 [myid:] - WARN [RecvWorker:5:QuorumCnxManager$RecvWorker@915] - Connection broken for id 5, my id = 8, error = java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900) 2016-07-21 04:24:40,877 [myid:] - WARN [RecvWorker:5:QuorumCnxManager$RecvWorker@915] - Connection broken for id 5, my id = 6, error = java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900) 2016-07-21 04:24:40,877 [myid:] - WARN [SendWorker:8:QuorumCnxManager$SendWorker@832] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:982) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:63) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:820) 2016-07-21 04:24:40,877 [myid:] - WARN [RecvWorker:8:QuorumCnxManager$RecvWorker@915] - Connection broken for id 8, my id = 5, error = java.net.SocketException: Socket closed at 
java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.read(SocketInputStream.java:152) at java.net.SocketInputStream.read(SocketInputStream.java:122) at java.net.SocketInputStream.read(SocketInputStream.java:210) at java.io.DataInputStream.readInt(DataInputStream.java:387) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900) 2016-07-21 04:24:40,879 [myid:] - WARN [RecvWorker:8:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker 2016-07-21 04:24:40,877 [myid:] - WARN [RecvWorker:5:QuorumCnxManager$RecvWorker@915] - Connection broken for id 5, my id = 7, error = java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900) 2016-07-21 04:24:40,877 [myid:] - WARN [SendWorker:7:QuorumCnxManager$SendWorker@832] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:982) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:63) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:820) 2016-07-21 04:24:40,877 [myid:] - WARN [SendWorker:6:QuorumCnxManager$SendWorker@832] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:982) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:63) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:820) 2016-07-21 04:24:40,877 [myid:] - WARN [RecvWorker:6:QuorumCnxManager$RecvWorker@915] - Connection broken for id 6, my id = 5, error = java.net.SocketException: Socket closed at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.read(SocketInputStream.java:152) at java.net.SocketInputStream.read(SocketInputStream.java:122) at java.net.SocketInputStream.read(SocketInputStream.java:210) at java.io.DataInputStream.readInt(DataInputStream.java:387) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900) 2016-07-21 04:24:40,881 [myid:] - WARN [RecvWorker:6:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker 2016-07-21 04:24:40,877 [myid:] - INFO [WorkerSender[myid=5]:FastLeaderElection$Messenger$WorkerSender@470] - WorkerSender is down 2016-07-21 04:24:40,877 [myid:] - INFO [main:QuorumBase@398] - Shutting down leader election QuorumPeer 2016-07-21 04:24:40,876 [myid:] - INFO [WorkerReceiver[myid=7]:QuorumHierarchical@136] - 9, 9, 3 2016-07-21 04:24:40,876 [myid:] - INFO [Thread-9:FLEZeroWeightTest$LEThread@125] - Finished election: 8, 2 2016-07-21 04:24:40,876 [myid:] - INFO [WorkerReceiver[myid=8]:FastLeaderElection@688] - Notification: 2 (message format version), 6 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 6 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-07-21 04:24:40,875 [myid:] - INFO [WorkerReceiver[myid=6]:QuorumHierarchical@136] - 9, 9, 3 2016-07-21 04:24:40,881 
[myid:] - INFO [WorkerReceiver[myid=7]:FastLeaderElection@688] - Notification: 2 (message format version), 6 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 6 (n.sid), 0x0 (n.peerEPoch), FOLLOWING (my state)0 (n.config version) 2016-07-21 04:24:40,881 [myid:] - INFO [main:QuorumBase@403] - Waiting for QuorumPeer to exit thread 2016-07-21 04:24:40,880 [myid:] - WARN [SendWorker:6:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 6 my id = 5 2016-07-21 04:24:40,880 [myid:] - WARN [SendWorker:7:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 7 my id = 5 2016-07-21 04:24:40,880 [myid:] - WARN [RecvWorker:5:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker 2016-07-21 04:24:40,879 [myid:] - WARN [SendWorker:8:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 8 my id = 5 2016-07-21 04:24:40,878 [myid:] - WARN [RecvWorker:5:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker 2016-07-21 04:24:40,883 [myid:] - WARN [SendWorker:5:QuorumCnxManager$SendWorker@832] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:982) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:63) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:820) 2016-07-21 04:24:40,878 [myid:] - WARN [RecvWorker:5:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker 2016-07-21 04:24:40,878 [myid:] - INFO [WorkerReceiver[myid=5]:FastLeaderElection$Messenger$WorkerReceiver@440] - WorkerReceiver is down 
2016-07-21 04:24:40,883 [myid:] - WARN [SendWorker:5:QuorumCnxManager$SendWorker@832] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:982) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:63) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:820) 2016-07-21 04:24:40,883 [myid:] - INFO [WorkerReceiver[myid=7]:QuorumHierarchical@136] - 9, 9, 3 2016-07-21 04:24:40,883 [myid:] - WARN [SendWorker:5:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 5 my id = 7 2016-07-21 04:24:40,883 [myid:] - WARN [SendWorker:5:QuorumCnxManager$SendWorker@832] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:982) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:63) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:820) 2016-07-21 04:24:40,882 [myid:] - INFO [main:QuorumBase@394] - Shutting down quorum peer QuorumPeer 2016-07-21 04:24:40,882 [myid:] - INFO [WorkerReceiver[myid=6]:FastLeaderElection@688] - 
Notification: 2 (message format version), 7 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 7 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-07-21 04:24:40,884 [myid:] - WARN [SendWorker:5:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 5 my id = 6 2016-07-21 04:24:40,884 [myid:] - INFO [WorkerReceiver[myid=7]:FastLeaderElection@688] - Notification: 2 (message format version), 8 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 8 (n.sid), 0x0 (n.peerEPoch), FOLLOWING (my state)0 (n.config version) 2016-07-21 04:24:40,884 [myid:] - WARN [SendWorker:5:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 5 my id = 8 2016-07-21 04:24:40,883 [myid:] - INFO [WorkerReceiver[myid=8]:QuorumHierarchical@136] - 9, 9, 3 2016-07-21 04:24:40,889 [myid:] - INFO [WorkerReceiver[myid=6]:QuorumHierarchical@136] - 9, 9, 3 2016-07-21 04:24:40,885 [myid:] - INFO [/127.0.0.1:11241:QuorumCnxManager$Listener@661] - Leaving listener 2016-07-21 04:24:40,890 [myid:] - INFO [Thread-7:FLEZeroWeightTest$LEThread@115] - Thread 6 got a null vote 2016-07-21 04:24:40,891 [myid:] - WARN [SendWorker:7:QuorumCnxManager$SendWorker@832] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:982) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:63) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:820) 2016-07-21 04:24:40,891 [myid:] - WARN [SendWorker:7:QuorumCnxManager$SendWorker@841] - Send worker leaving 
thread id 7 my id = 6 2016-07-21 04:24:40,890 [myid:] - INFO [WorkerReceiver[myid=6]:FastLeaderElection@688] - Notification: 2 (message format version), 7 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 7 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version) 2016-07-21 04:24:40,889 [myid:] - INFO [WorkerReceiver[myid=8]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0xffffffffffffffff (n.round), FOLLOWING (n.state), 3 (n.sid), 0x0 (n.peerEPoch), FOLLOWING (my state)0 (n.config version) 2016-07-21 04:24:40,893 [myid:] - INFO [WorkerReceiver[myid=6]:FastLeaderElection$Messenger$WorkerReceiver@440] - WorkerReceiver is down 2016-07-21 04:24:40,892 [myid:] - WARN [SendWorker:8:QuorumCnxManager$SendWorker@832] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:982) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:63) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:820) 2016-07-21 04:24:40,892 [myid:] - WARN [RecvWorker:6:QuorumCnxManager$RecvWorker@915] - Connection broken for id 6, my id = 7, error = java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900) 2016-07-21 04:24:40,894 [myid:] - WARN [RecvWorker:6:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker 2016-07-21 04:24:40,891 [myid:] - WARN [RecvWorker:6:QuorumCnxManager$RecvWorker@915] 
- Connection broken for id 6, my id = 8, error = java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900) 2016-07-21 04:24:40,894 [myid:] - WARN [RecvWorker:6:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker 2016-07-21 04:24:40,891 [myid:] - WARN [RecvWorker:8:QuorumCnxManager$RecvWorker@915] - Connection broken for id 8, my id = 6, error = java.net.SocketException: Socket closed at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.read(SocketInputStream.java:152) at java.net.SocketInputStream.read(SocketInputStream.java:122) at java.net.SocketInputStream.read(SocketInputStream.java:210) at java.io.DataInputStream.readInt(DataInputStream.java:387) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900) 2016-07-21 04:24:40,894 [myid:] - WARN [RecvWorker:8:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker 2016-07-21 04:24:40,891 [myid:] - WARN [RecvWorker:7:QuorumCnxManager$RecvWorker@915] - Connection broken for id 7, my id = 6, error = java.net.SocketException: Socket closed at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.read(SocketInputStream.java:152) at java.net.SocketInputStream.read(SocketInputStream.java:122) at java.net.SocketInputStream.read(SocketInputStream.java:210) at java.io.DataInputStream.readInt(DataInputStream.java:387) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900) 2016-07-21 04:24:40,895 [myid:] - WARN [RecvWorker:7:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker 2016-07-21 04:24:40,891 [myid:] - INFO [main:QuorumBase@398] - Shutting down leader election QuorumPeer 2016-07-21 04:24:40,891 [myid:] - INFO [WorkerReceiver[myid=7]:QuorumHierarchical@136] - 9, 9, 3 2016-07-21 04:24:40,895 [myid:] - INFO [main:QuorumBase@403] - Waiting for 
QuorumPeer to exit thread 2016-07-21 04:24:40,894 [myid:] - INFO [WorkerReceiver[myid=8]:QuorumHierarchical@136] - 9, 9, 3 2016-07-21 04:24:40,894 [myid:] - WARN [SendWorker:6:QuorumCnxManager$SendWorker@832] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:982) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:63) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:820) 2016-07-21 04:24:40,894 [myid:] - WARN [SendWorker:6:QuorumCnxManager$SendWorker@832] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:982) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:63) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:820) 2016-07-21 04:24:40,896 [myid:] - WARN [SendWorker:6:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 6 my id = 7 2016-07-21 04:24:40,893 [myid:] - WARN [SendWorker:8:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 8 my id = 6 2016-07-21 04:24:40,896 [myid:] - WARN 
[SendWorker:6:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 6 my id = 8 2016-07-21 04:24:40,896 [myid:] - INFO [WorkerReceiver[myid=8]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0xffffffffffffffff (n.round), FOLLOWING (n.state), 3 (n.sid), 0x0 (n.peerEPoch), FOLLOWING (my state)0 (n.config version) 2016-07-21 04:24:40,896 [myid:] - INFO [main:QuorumBase@394] - Shutting down quorum peer QuorumPeer 2016-07-21 04:24:40,896 [myid:] - INFO [WorkerReceiver[myid=7]:FastLeaderElection@688] - Notification: 2 (message format version), 8 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 8 (n.sid), 0x0 (n.peerEPoch), FOLLOWING (my state)0 (n.config version) 2016-07-21 04:24:40,897 [myid:] - INFO [/127.0.0.1:11244:QuorumCnxManager$Listener@661] - Leaving listener 2016-07-21 04:24:40,897 [myid:] - INFO [WorkerReceiver[myid=8]:QuorumHierarchical@136] - 9, 9, 3 2016-07-21 04:24:40,898 [myid:] - WARN [RecvWorker:7:QuorumCnxManager$RecvWorker@915] - Connection broken for id 7, my id = 8, error = java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900) 2016-07-21 04:24:47,925 [myid:] - WARN [RecvWorker:7:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker 2016-07-21 04:24:40,898 [myid:] - WARN [RecvWorker:8:QuorumCnxManager$RecvWorker@915] - Connection broken for id 8, my id = 7, error = java.net.SocketException: Socket closed at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.read(SocketInputStream.java:152) at java.net.SocketInputStream.read(SocketInputStream.java:122) at java.net.SocketInputStream.read(SocketInputStream.java:210) at java.io.DataInputStream.readInt(DataInputStream.java:387) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:900) 2016-07-21 04:24:47,926 [myid:] - WARN 
[RecvWorker:8:QuorumCnxManager$RecvWorker@918] - Interrupting SendWorker 2016-07-21 04:24:40,898 [myid:] - INFO [main:QuorumBase@398] - Shutting down leader election QuorumPeer 2016-07-21 04:24:43,790 [myid:] - INFO [WorkerSender[myid=3]:FastLeaderElection$Messenger$WorkerSender@470] - WorkerSender is down 2016-07-21 04:24:43,790 [myid:] - INFO [WorkerReceiver[myid=3]:FastLeaderElection$Messenger$WorkerReceiver@440] - WorkerReceiver is down 2016-07-21 04:24:43,642 [myid:] - INFO [WorkerSender[myid=1]:FastLeaderElection$Messenger$WorkerSender@470] - WorkerSender is down 2016-07-21 04:24:43,642 [myid:] - INFO [WorkerReceiver[myid=1]:FastLeaderElection$Messenger$WorkerReceiver@440] - WorkerReceiver is down 2016-07-21 04:24:43,585 [myid:] - INFO [WorkerSender[myid=4]:FastLeaderElection$Messenger$WorkerSender@470] - WorkerSender is down 2016-07-21 04:24:43,574 [myid:] - INFO [WorkerSender[myid=6]:FastLeaderElection$Messenger$WorkerSender@470] - WorkerSender is down 2016-07-21 04:24:40,898 [myid:] - WARN [SendWorker:8:QuorumCnxManager$SendWorker@837] - Exception when using channel: for id 8 my id = 7 error = java.net.SocketException: Socket closed 2016-07-21 04:24:40,898 [myid:] - INFO [WorkerReceiver[myid=7]:QuorumHierarchical@136] - 9, 9, 3 2016-07-21 04:24:40,898 [myid:] - INFO [WorkerReceiver[myid=8]:FastLeaderElection@688] - Notification: 2 (message format version), 4 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 4 (n.sid), 0x0 (n.peerEPoch), FOLLOWING (my state)0 (n.config version) 2016-07-21 04:24:47,927 [myid:] - WARN [SendWorker:8:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 8 my id = 7 2016-07-21 04:24:47,926 [myid:] - INFO [main:QuorumBase@403] - Waiting for QuorumPeer to exit thread 2016-07-21 04:24:47,926 [myid:] - WARN [SendWorker:7:QuorumCnxManager$SendWorker@832] - Interrupted while waiting for message on queue java.lang.InterruptedException at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:982) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:63) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:820) 2016-07-21 04:24:47,928 [myid:] - INFO [/127.0.0.1:11247:QuorumCnxManager$Listener@638] - Received connection request /127.0.0.1:57823 2016-07-21 04:24:47,928 [myid:] - INFO [WorkerSender[myid=7]:QuorumCnxManager@276] - Have smaller server identifier, so dropping the connection: (8, 7) 2016-07-21 04:24:47,928 [myid:] - INFO [WorkerReceiver[myid=8]:QuorumHierarchical@136] - 9, 9, 3 2016-07-21 04:24:47,928 [myid:] - WARN [SendWorker:7:QuorumCnxManager$SendWorker@841] - Send worker leaving thread id 7 my id = 8 2016-07-21 04:24:47,929 [myid:] - INFO [main:QuorumBase@394] - Shutting down quorum peer QuorumPeer 2016-07-21 04:24:47,929 [myid:] - WARN [WorkerSender[myid=8]:QuorumCnxManager@455] - Cannot open channel to 4 at election address /127.0.0.1:11235 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441) at 
org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465) at java.lang.Thread.run(Thread.java:745) 2016-07-21 04:24:47,929 [myid:] - INFO [WorkerSender[myid=7]:FastLeaderElection$Messenger$WorkerSender@470] - WorkerSender is down 2016-07-21 04:24:47,929 [myid:] - INFO [WorkerReceiver[myid=7]:FastLeaderElection@688] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0xffffffffffffffff (n.round), FOLLOWING (n.state), 3 (n.sid), 0x0 (n.peerEPoch), FOLLOWING (my state)0 (n.config version) 2016-07-21 04:24:47,930 [myid:] - INFO [WorkerReceiver[myid=7]:FastLeaderElection$Messenger$WorkerReceiver@440] - WorkerReceiver is down 2016-07-21 04:24:47,930 [myid:] - WARN [/127.0.0.1:11247:QuorumCnxManager@455] - Cannot open channel to 7 at election address /127.0.0.1:11244 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441) at org.apache.zookeeper.server.quorum.QuorumCnxManager.receiveConnection(QuorumCnxManager.java:367) at org.apache.zookeeper.server.quorum.QuorumCnxManager$Listener.run(QuorumCnxManager.java:640) 2016-07-21 04:24:47,930 [myid:] - INFO [/127.0.0.1:11247:QuorumCnxManager$Listener@661] - Leaving listener 
2016-07-21 04:24:47,930 [myid:] - INFO [WorkerReceiver[myid=8]:FastLeaderElection@688] - Notification: 2 (message format version), 7 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 7 (n.sid), 0x0 (n.peerEPoch), FOLLOWING (my state)0 (n.config version) 2016-07-21 04:24:47,931 [myid:] - INFO [main:QuorumBase@398] - Shutting down leader election QuorumPeer 2016-07-21 04:24:47,931 [myid:] - INFO [main:QuorumBase@403] - Waiting for QuorumPeer to exit thread 2016-07-21 04:24:47,931 [myid:] - INFO [WorkerReceiver[myid=8]:FastLeaderElection$Messenger$WorkerReceiver@440] - WorkerReceiver is down 2016-07-21 04:24:47,932 [myid:] - WARN [WorkerSender[myid=8]:QuorumCnxManager@455] - Cannot open channel to 7 at election address /127.0.0.1:11244 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:441) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:482) at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:419) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486) at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465) at java.lang.Thread.run(Thread.java:745) 2016-07-21 04:24:47,932 [myid:] - INFO [WorkerSender[myid=8]:FastLeaderElection$Messenger$WorkerSender@470] - WorkerSender is down 2016-07-21 04:24:47,933 [myid:] - INFO [main:ZKTestCase$1@70] - FAILED testZeroWeightQuorum 
java.lang.AssertionError: Threads didn't join
    at org.junit.Assert.fail(Assert.java:88)
    at org.apache.zookeeper.test.FLEZeroWeightTest.testZeroWeightQuorum(FLEZeroWeightTest.java:167)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:79)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:53)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
    at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:38)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906)
2016-07-21 04:24:47,933 [myid:] - INFO [main:ZKTestCase$1@60] - FINISHED testZeroWeightQuorum {noformat} |
flaky, flaky-test | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 1 year, 21 weeks ago | 0|i31cin: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2480 | Exhibitor and zookeeper, Cannot open channel to x at election address |
Bug | Resolved | Minor | Not A Problem | Unassigned | Kerem Yazici | Kerem Yazici | 19/Jul/16 07:11 | 19/Jul/16 11:45 | 19/Jul/16 11:45 | 3.4.8 | quorum | 0 | 1 | We have a 5-node ensemble set up in our test environment and we are seeing the error below on two of the nodes. We have 3 nodes in the first data centre and 2 nodes in the second data centre, all of them managed by Exhibitor. The problem is that the nodes cannot talk to the leader node in their own data centre, but can talk to the other nodes without any issue. If I bounce the leader node to force another node to get elected, then the other nodes in the same data centre start throwing the exception below. I'm sure the problem is with DNS name resolution, but I would like to understand how ZooKeeper resolves these DNS names and what might be the issue here, so I can go back to our Unix team and get this fixed. {code}
2016-07-19 10:48:54,711 [myid:4] - WARN [WorkerSender[myid=4]:QuorumCnxManager@400] - Cannot open channel to 5 at election address server1.dns.name/192.168.1.3:4882
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:589)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:381)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:354)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:452)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:433)
    at java.lang.Thread.run(Thread.java:745)
{code}
{code:title=zoo.cfg|borderStyle=solid}
#Auto-generated by Exhibitor - Fri Jul 15 11:30:52 BST 2016
#Fri Jul 15 11:30:52 BST 2016
server.2=server2.dns.name\:4881\:4882\:observer
autopurge.purgeInterval=4
server.1=server1.dns.name\:4881\:4882
initLimit=50
syncLimit=2
clientPort=4880
tickTime=2001
server.5=server5.dns.name\:4881\:4882
dataDir=/opt/app/datafabric/data/zookeeper
server.4=server4.dns.name\:4881\:4882
dataLogDir=/path/to/datalogdir
server.3=server3.dns.name\:4881\:4882
{code} |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 35 weeks, 2 days ago | 0|i316yv: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2479 | Add 'electionTimeTaken' value in LeaderMXBean and FollowerMXBean |
Improvement | Closed | Major | Fixed | Rakesh Radhakrishnan | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 19/Jul/16 01:01 | 31/Mar/17 05:01 | 20/Dec/16 08:53 | 3.4.10, 3.5.3, 3.6.0 | quorum | 0 | 6 | ZOOKEEPER-1045 | The idea of this jira is to expose {{time taken}} for the leader election via jmx Leader, Follower beans. | 9223372036854775807 | No Perforce job exists for this issue. | 6 | 9223372036854775807 | 3 years, 13 weeks ago | 0|i316i7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2478 | Duplicate 'the' on Zab in words Wiki page |
Bug | Resolved | Trivial | Fixed | Flavio Paiva Junqueira | Richard Shaw | Richard Shaw | 18/Jul/16 21:12 | 31/Jan/19 08:48 | 19/Jul/16 01:02 | documentation | 0 | 4 | There's a duplicate 'the' on the ZooKeeper wiki page "Zab in words", in Phase 1: Establish an epoch, step 4.1. |
documentation | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 1 year, 7 weeks ago | https://cwiki.apache.org/confluence/display/ZOOKEEPER/Zab1.0 | 0|i316c7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2477 | documentation should refer to Java cli shell and not C cli shell |
Bug | Closed | Major | Fixed | Abraham Fine | Patrick D. Hunt | Patrick D. Hunt | 17/Jul/16 13:09 | 04/Sep/16 01:28 | 29/Jul/16 18:02 | 3.4.8, 3.5.2 | 3.4.9, 3.5.3, 3.6.0 | documentation | 0 | 5 | ZOOKEEPER-2494 | The documentation tends to refer to the C CLI shell when citing examples of how to interact with ZK, rather than using the Java CLI shell. Given that the Java CLI is much better maintained and more featureful, the docs should refer to that instead. Also, the C CLI was originally meant to be a sample/example of C client usage rather than a true CLI tool. | newbie | 9223372036854775807 | No Perforce job exists for this issue. | 6 | 9223372036854775807 | 3 years, 33 weeks, 6 days ago |
Reviewed
|
0|i313qv: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2476 | Not possible to upgrade via reconfig a Participant+Observer cluster to a Participant+Participant cluster |
Bug | Resolved | Critical | Not A Bug | Alexander Shraer | Jordan Zimmerman | Jordan Zimmerman | 15/Jul/16 20:25 | 16/Jul/16 20:21 | 16/Jul/16 20:21 | 3.5.1 | quorum, server | 0 | 2 | Contrary to the documentation, it is not possible to upgrade via reconfig a Participant+Observer cluster to a Participant+Participant cluster. KeeperException.NewConfigNoQuorum is thrown instead. PrepRequestProcessor should recognize this special case and let it pass. Test will be enclosed shortly. I'll work on a fix as well, but I imagine that [~shralex] will want to look at it. |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 |
Important
|
3 years, 35 weeks, 5 days ago | 0|i312g7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2475 | Include ZKClientConfig API in zoookeeper javadoc |
Bug | Patch Available | Major | Unresolved | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 14/Jul/16 10:06 | 05/Feb/20 07:11 | 3.5.2 | 3.7.0, 3.5.8 | build, documentation | 0 | 3 | Generate the ZooKeeper API doc using the {{ant javadoc}} command and open build/docs/api/index.html; ZKClientConfig is not present. |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 32 weeks, 2 days ago | 0|i30z8f: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2474 | add a way for client to reattach to a session when using ZKClientConfig |
Bug | Closed | Major | Fixed | maoling | Timothy James Ward | Timothy James Ward | 14/Jul/16 08:52 | 20/May/19 13:50 | 12/Mar/19 14:09 | 3.5.2 | 3.6.0, 3.5.5 | java client | 0 | 4 | 0 | 14400 | The new constructors for ZooKeeper instances take a ZKClientConfig, which is great; however, there is no way to reattach to an existing session.
New constructors should be added to allow passing a session id and password when using ZKClientConfig. |
100% | 100% | 14400 | 0 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 1 year, 1 week, 2 days ago | 0|i30z2n: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2473 | ZooInspector support for read-only mode |
Improvement | Open | Minor | Unresolved | Benjamin Jaton | Benjamin Jaton | Benjamin Jaton | 11/Jul/16 13:28 | 11/Jul/16 13:36 | 3.5.1 | contrib | 0 | 1 | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 36 weeks, 3 days ago | 0|i30svz: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2472 | ServerConfig#parse(String[]) parse params has problems |
Bug | Resolved | Minor | Duplicate | Unassigned | yangkun | yangkun | 10/Jul/16 22:31 | 10/Jul/16 23:43 | 10/Jul/16 23:18 | server | 0 | 2 | When debugging ZooKeeper, run ZooKeeperServerMain and pass 4 args, e.g. 2181 F:\\zk\\data 2000 30, that is: clientPort = 2181, dataDir = F:\\zk\\data, tickTime = 2000, maxClientCnxns = 30. But the ServerConfig#parse(String[]) method has a little problem:
public void parse(String[] args) {
    ...
    if (args.length == 3) {
        tickTime = Integer.parseInt(args[2]);
    }
    if (args.length == 4) {
        maxClientCnxns = Integer.parseInt(args[3]);
    }
}
The problem is:
if (args.length == 4) {
    maxClientCnxns = Integer.parseInt(args[3]);
}
With 4 args it can't parse tickTime; it ignores the tickTime. This code snippet should be:
if (args.length == 4) {
    tickTime = Integer.parseInt(args[2]);
    maxClientCnxns = Integer.parseInt(args[3]);
} |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 36 weeks, 3 days ago | 0|i30rq7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
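The corrected argument handling described in this issue (and its duplicate, ZOOKEEPER-2470) can be sketched as a minimal standalone class. This is an illustration only, not the real ServerConfig: the class name, fields, and defaults here are assumptions mirroring the report.

```java
// Minimal sketch of the corrected ServerConfig#parse argument handling.
// Standalone illustration: names and defaults are assumptions, not the
// actual org.apache.zookeeper.server.ServerConfig class.
class ServerArgs {
    int clientPort;
    String dataDir;
    int tickTime = 2000;      // assumed default
    int maxClientCnxns = 60;  // assumed default

    void parse(String[] args) {
        clientPort = Integer.parseInt(args[0]);
        dataDir = args[1];
        if (args.length >= 3) {
            // both the 3-arg and 4-arg forms carry tickTime in args[2],
            // which is the case the original code missed
            tickTime = Integer.parseInt(args[2]);
        }
        if (args.length == 4) {
            maxClientCnxns = Integer.parseInt(args[3]);
        }
    }
}
```

With this shape, passing four args no longer silently drops the tickTime value.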
| ZooKeeper | ZOOKEEPER-2471 | Java Zookeeper Client incorrectly considers time spent sleeping as time spent connecting, potentially resulting in infinite reconnect loop |
Bug | In Progress | Major | Unresolved | Michael Han | Dan Benediktson | Dan Benediktson | 08/Jul/16 11:52 | 12/Dec/19 20:26 | 3.5.3 | java client | 0 | 8 | 0 | 3000 | ZOOKEEPER-2869 | all | ClientCnxnSocket uses a member variable "now" to track the current time, and lastSend / lastHeard variables to track socket liveness. Implementations, and even ClientCnxn itself, are expected to call both updateNow() to reset "now" to System.currentTimeMillis, and then call updateLastSend()/updateLastHeard() on IO completions. This is a fragile contract, so it's not surprising that there's a bug resulting from it: ClientCnxn.SendThread.run() calls updateLastSendAndHeard() as soon as startConnect() returns, but it does not call updateNow() first. I expect when this was written, either the expectation was that startConnect() was an asynchronous operation and that updateNow() would have been called very recently, or simply the requirement to call updateNow() was forgotten at this point. As far as I can see, this bug has been present since the "updateNow" method was first introduced in the distant past. As it turns out, since startConnect() calls HostProvider.next(), which can sleep, quite a lot of time can pass, leaving a big gap between "now" and now. If you are using very short session timeouts (one of our ZK ensembles has many clients using a 1-second timeout), this is potentially disastrous, because the sleep time may exceed the connection timeout itself, which can potentially result in the Java client being stuck in a perpetual reconnect loop. 
The exact code path it goes through in this case is complicated, because there has to be a previously-closed socket still waiting in the selector (otherwise, the first timeout evaluation will not fail because "now" still hasn't been updated, and then the actual connect timeout will be applied in ClientCnxnSocket.doTransport()) so that select() will harvest the IO from the previous socket and updateNow(), resulting in the next loop through ClientCnxnSocket.SendThread.run() observing the spurious timeout and failing. In practice it does happen to us fairly frequently; we only got to the bottom of the bug yesterday. Worse, when it does happen, the Zookeeper client object is rendered unusable: it's stuck in a perpetual reconnect loop where it keeps sleeping, opening a socket, and immediately closing it. I have a patch. Rather than calling updateNow() right after startConnect(), my fix is to remove the "now" member variable and the updateNow() method entirely, and to instead just call System.currentTimeMillis() whenever time needs to be evaluated. I realize there is a benefit (aside from a trivial micro-optimization not worth worrying about) to having the time be "fixed", particularly for truth in the logging: if time is fixed by an updateNow() call, then the log for a timeout will still show exactly the same value the code reasoned about. However, this benefit is in my opinion not enough to merit the fragility of the contract which led to this (for us) highly impactful and difficult-to-find bug in the first place. I'm currently running ant tests locally against my patch on trunk, and then I'll upload it here. |
100% | 100% | 3000 | 0 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 1 year, 3 weeks ago | 0|i30pkn: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
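The fragile "cached now" contract this report describes can be modelled with a toy timer: if the last-send timestamp is reset against a stale cached time after a long sleep inside startConnect(), the sleep still counts as idle time on the next timeout check. All names below are invented for illustration; this is not the real ClientCnxnSocket code.

```java
// Toy model of the stale-"now" bug: updateLastSend() reads a cached
// timestamp, so any time that passed since the last updateNow() call
// (e.g. a HostProvider sleep) is silently counted as idle time.
// Hypothetical names; times are passed in explicitly instead of using
// System.currentTimeMillis() so the behaviour is deterministic.
class TickClock {
    long now;       // cached time, refreshed only by updateNow()
    long lastSend;

    void updateNow(long wallClock) { now = wallClock; }
    void updateLastSend()          { lastSend = now; }          // bug: stale cache
    long idleMillis(long wallClock){ return wallClock - lastSend; }
}
```

With a 1-second session timeout, a multi-second sleep before the stale reset is enough to make the very next evaluation see a "timeout", which matches the reconnect loop described above.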
| ZooKeeper | ZOOKEEPER-2470 | ServerConfig#parse(String[]) ignores tickTime |
Bug | Closed | Trivial | Fixed | Edward Ribeiro | Alexander Shraer | Alexander Shraer | 08/Jul/16 00:59 | 31/Mar/17 05:01 | 21/Dec/16 22:12 | 3.4.7, 3.5.1 | 3.4.10, 3.5.3, 3.6.0 | server | 0 | 8 | ZOOKEEPER-2656 | Based on the bug report from ykgarfield: the ServerConfig#parse(String[]) method has the following code:
public void parse(String[] args) {
    ...
    if (args.length == 3) {
        tickTime = Integer.parseInt(args[2]);
    }
    if (args.length == 4) {
        maxClientCnxns = Integer.parseInt(args[3]);
    }
}
So if args.length == 4, tickTime isn't parsed. |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 12 weeks, 6 days ago | 0|i30osn: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2469 | infinite loop in ZK re-login |
Bug | Patch Available | Major | Unresolved | Sergey Shelukhin | Sergey Shelukhin | Sergey Shelukhin | 07/Jul/16 20:55 | 05/Aug/16 15:06 | 0 | 9 | {noformat}
int retry = 1;
while (retry >= 0) {
    try {
        reLogin();
        break;
    } catch (LoginException le) {
        if (retry > 0) {
            --retry;
            // sleep for 10 seconds.
            try {
                Thread.sleep(10 * 1000);
            } catch (InterruptedException e) {
                LOG.error("Interrupted during login retry after LoginException:", le);
                throw le;
            }
        } else {
            LOG.error("Could not refresh TGT for principal: " + principal + ".", le);
        }
    }
}
{noformat} will retry forever. Should return like the one above. |
9223372036854775807 | No Perforce job exists for this issue. | 4 | 9223372036854775807 | 3 years, 32 weeks, 6 days ago | 0|i30ojj: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
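A bounded version of the retry loop quoted in this report might look like the following. The decrement-and-exit structure is the point; the helper name, the retry count, and the omitted sleep are stand-ins, not the actual ZooKeeper Login code.

```java
// Sketch of a retry loop that cannot spin forever: the counter is
// decremented on every failure, and once retries are exhausted the loop
// exits by rethrowing instead of looping again with retry stuck at 0.
// ReloginHelper and Login are hypothetical names for illustration.
class ReloginHelper {
    interface Login { void attempt() throws Exception; }

    // returns the number of attempts made on success
    static int relogin(Login login, int retries) throws Exception {
        int attempts = 0;
        while (true) {
            attempts++;
            try {
                login.attempt();
                return attempts;      // success
            } catch (Exception le) {
                if (--retries < 0) {
                    throw le;         // exhausted: give up instead of looping
                }
                // real code would sleep ~10s here before retrying
            }
        }
    }
}
```

Unlike the quoted loop, a permanently failing login here terminates after the initial attempt plus the configured retries.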
| ZooKeeper | ZOOKEEPER-2468 | SetQuota and DelQuota |
Bug | Patch Available | Major | Unresolved | SREENIVASULUDANDU | Joshi Shankar | Joshi Shankar | 07/Jul/16 01:34 | 12/Dec/16 03:39 | 3.4.6, 3.4.8, 3.5.1 | java client | 1 | 5 | Windows/Linux | The setquota and delquota commands do not work. Steps to reproduce:
1. Create test node 1: create -c /TestZookeeperNodeNumber "testdata"
2. Create test node 2: create -c /TestZookeeperNodeBytes "testdatabytes"
3. Set quota using setquota -n 1 /TestZookeeperNodeNumber
4. Set quota using setquota -b 10 /TestZookeeperNodeBytes
AlreadySelectedException is thrown by Apache Commons CLI. It is a bug in Apache Commons CLI (https://issues.apache.org/jira/browse/CLI-183). We can fix it by upgrading Apache Commons CLI from *commons-cli-1.2.jar* to *commons-cli-1.3.1.jar*. Client operation log:
[zk: localhost:2181(CONNECTED) 2] create -c /TestZookeeperNodeNumber "testdata"
Created /TestZookeeperNodeNumber
[zk: localhost:2181(CONNECTED) 3] create -c /TestZookeeperNodeBytes "testdatabytes"
Created /TestZookeeperNodeBytes
[zk: localhost:2181(CONNECTED) 4] setquota -n 1 /TestZookeeperNodeNumber
[zk: localhost:2181(CONNECTED) 5] setquota -b 10 /TestZookeeperNodeBytes
The option 'b' was specified but an option from this group has already been selected: 'n'
ZooKeeper -server host:port cmd args addauth scheme auth close config [-c] [-w] [-s] connect host:port create [-s] [-e] [-c] path [data] [acl] delete [-v version] path deleteall path delquota [-n|-b] path get [-s] [-w] path getAcl [-s] path history listquota path ls [-s] [-w] path ls2 path [watch] printwatches on|off quit reconfig [-s] [-v version] [[-file path] | [-members serverID=host:port1:port2;port3[,...]*] ] | [-add serverId=host:port1:port2;port3[,...]]* [-remove serverId[,...]*] redo cmdno removewatches path [-c|-d|-a] [-l] rmr path set [-s] [-v version] path data setAcl [-s] [-v version] path acl setquota -n|-b val path stat [-w] path sync path |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 3 years, 14 weeks, 3 days ago | https://issues.apache.org/jira/browse/CLI-183 | 0|i30mun: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
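The AlreadySelectedException above comes from a mutually-exclusive option group whose "selected" state is not cleared between parses (CLI-183, fixed in commons-cli 1.3.1). The sketch below is a minimal reimplementation of that bug class for illustration; the class and method names are hypothetical and this is not the actual Commons CLI code.

```java
import java.util.Arrays;
import java.util.List;

// Minimal sketch of the CLI-183 bug class: a mutually-exclusive option
// group whose "selected" state sticks across parse() calls.
// Hypothetical names; not the real Commons CLI API.
public class OptionGroupSketch {
    private String selected; // sticky across parses: the bug
    private final List<String> group = Arrays.asList("-n", "-b");

    public String parse(String arg) {
        if (!group.contains(arg)) throw new IllegalArgumentException(arg);
        if (selected != null && !selected.equals(arg)) {
            // With a shared, unreset group, "setquota -n" followed by
            // "setquota -b" in the same CLI session trips this branch.
            throw new IllegalStateException(
                "The option '" + arg + "' was specified but an option from "
                + "this group has already been selected: '" + selected + "'");
        }
        selected = arg;
        return arg;
    }

    // The shape of the fix: clear the selection before each new parse.
    public void reset() { selected = null; }
}
```

With the reset applied per command, `setquota -n` and a later `setquota -b` in the same session no longer collide.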
| ZooKeeper | ZOOKEEPER-2467 | NullPointerException when redo Command is passed negative value |
Bug | Closed | Minor | Fixed | Rakesh Kumar Singh | Joshi Shankar | Joshi Shankar | 06/Jul/16 22:51 | 31/Mar/17 05:01 | 16/Oct/16 09:18 | 3.4.8, 3.5.1, 3.5.2 | 3.4.10, 3.5.3, 3.6.0 | java client | 0 | 6 | ZOOKEEPER-2587 | Linux, Windows | When a negative value is passed as the argument to the redo command: [zk: localhost:2181(CONNECTED) 0] redo -1 Exception in thread "main" java.lang.NullPointerException at java.util.StringTokenizer.<init>(Unknown Source) at java.util.StringTokenizer.<init>(Unknown Source) at org.apache.zookeeper.ZooKeeperMain$MyCommandOptions.parseCommand(ZooKeeperMain.java:227) at org.apache.zookeeper.ZooKeeperMain.processZKCmd(ZooKeeperMain.java:645) at org.apache.zookeeper.ZooKeeperMain.processCmd(ZooKeeperMain.java:588) at org.apache.zookeeper.ZooKeeperMain.executeLine(ZooKeeperMain.java:360) at org.apache.zookeeper.ZooKeeperMain.run(ZooKeeperMain.java:323) at org.apache.zookeeper.ZooKeeperMain.main(ZooKeeperMain.java:282) |
9223372036854775807 | No Perforce job exists for this issue. | 8 | 9223372036854775807 | 3 years, 22 weeks, 4 days ago | 0|i30ms7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
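The stack trace above shows a StringTokenizer being constructed on a null string: `redo -1` looks up a command number that was never recorded in history, and the null lookup result propagates into the tokenizer. A defensive sketch of the failure mode and its guard (hypothetical class and method names, not the actual ZooKeeperMain code):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.StringTokenizer;

// Sketch of the ZOOKEEPER-2467 failure mode: "redo -1" fetches from the
// command history with a key that was never stored; the null result
// reaches new StringTokenizer(null), which throws NullPointerException.
// Names here are illustrative only.
public class RedoGuardSketch {
    private final Map<Integer, String> history = new HashMap<>();
    private int count = 0;

    public void record(String cmd) { history.put(count++, cmd); }

    // Guarded lookup: reject indices outside recorded history instead of
    // letting null propagate into the tokenizer.
    public String redo(int index) {
        String cmd = history.get(index);
        if (cmd == null) {
            throw new IllegalArgumentException("Command index out of range: " + index);
        }
        return new StringTokenizer(cmd).nextToken();
    }
}
```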
| ZooKeeper | ZOOKEEPER-2466 | Client skips servers when trying to connect |
Bug | Patch Available | Major | Unresolved | Michael Han | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 05/Jul/16 17:04 | 14/Dec/19 06:07 | 3.7.0 | c client | 0 | 7 | ZOOKEEPER-1856 | I've been looking at {{Zookeeper_simpleSystem::testFirstServerDown}} and I observed the following behavior. The list of servers to connect contains two servers, let's call them S1 and S2. The client never connects, but the odd bit is the sequence of servers that the client tries to connect to: {noformat} S1 S2 S1 S1 S1 <keeps repeating S1> {noformat} It intrigued me that S2 is only tried once and never again. Checking the code, here is what happens. Initially, {{zh->reconfig}} is 1, so in {{zoo_cycle_next_server}} we return an address from {{get_next_server_in_reconfig}}, which is taken from {{zh->addrs_new}} in this test case. The attempt to connect fails, and {{handle_error}} is invoked in the error handling path. {{handle_error}} actually invokes {{addrvec_next}} which changes the address pointer to the next server on the list. After two attempts, it decides that it has tried all servers in {{zoo_cycle_next_server}} and sets {{zh->reconfig}} to zero. Once {{zh->reconfig == 0}}, we have that each call to {{zoo_cycle_next_server}} moves the address pointer to the next server in {{zh->addrs}}. But, given that {{handle_error}} also moves the pointer to the next server, we end up moving the pointer ahead twice upon every failed attempt to connect, which is wrong. |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 1 year, 17 weeks ago | 0|i30k7z: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2465 | Documentation copyright notice is out of date. |
Bug | Closed | Blocker | Fixed | Edward Ribeiro | Chris Nauroth | Chris Nauroth | 05/Jul/16 13:38 | 31/Mar/17 05:01 | 30/Dec/16 17:59 | 3.4.10, 3.5.3, 3.6.0 | documentation | 0 | 8 | As reported by [~eribeiro], all of the documentation pages show a copyright notice dating "2008-2013". This issue tracks updating the copyright notice on all documentation pages to show the current year. | 9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 3 years, 11 weeks, 6 days ago | 0|i30jsv: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2464 | NullPointerException on ContainerManager |
Bug | Closed | Major | Fixed | Jordan Zimmerman | Stefano Salmaso | Stefano Salmaso | 04/Jul/16 18:21 | 17/May/17 23:43 | 11/Feb/17 10:09 | 3.5.1 | 3.5.3, 3.6.0 | server | 2 | 8 | ZOOKEEPER-2705, ZOOKEEPER-2680 | I would like to report a problem that we are experiencing. We are using a cluster of 7 ZooKeeper servers and we use them to implement a distributed lock using Curator (http://curator.apache.org/curator-recipes/shared-reentrant-lock.html). We tried to play with the servers to see if everything worked properly, stopping and starting servers to check that the system kept working (like stop 03, stop 05, stop 06, start 05, start 06, start 03). We saw a strange behavior: the number of znodes grew without stopping (normally we had 4000 or 5000; we got to 60,000 and then we stopped our application). In the ZooKeeper logs I saw this (on the leader only, once every minute): 2016-07-04 14:53:50,302 [myid:7] - ERROR [ContainerManagerTask:ContainerManager$1@84] - Error checking containers java.lang.NullPointerException at org.apache.zookeeper.server.ContainerManager.getCandidates(ContainerManager.java:151) at org.apache.zookeeper.server.ContainerManager.checkContainers(ContainerManager.java:111) at org.apache.zookeeper.server.ContainerManager$1.run(ContainerManager.java:78) at java.util.TimerThread.mainLoop(Timer.java:555) at java.util.TimerThread.run(Timer.java:505) We have not yet deleted the data, so the problem can be reproduced on our servers. |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 3 years, 5 weeks, 5 days ago | 0|i30ijr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2463 | TestMulti is broken in the C client |
Bug | Closed | Blocker | Cannot Reproduce | Unassigned | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 03/Jul/16 11:38 | 19/Dec/19 18:01 | 24/Nov/16 11:31 | 3.5.3 | 0 | 2 | I noticed that all multi tests seem to be timing out and they are failing silently. This is the output that I'm observing: {noformat} Zookeeper_multi::testCreate : assertion : elapsed 10001 Zookeeper_multi::testCreateDelete : assertion : elapsed 10001 Zookeeper_multi::testInvalidVersion : assertion : elapsed 10001 Zookeeper_multi::testNestedCreate : assertion : elapsed 10001 Zookeeper_multi::testSetData : assertion : elapsed 10001 Zookeeper_multi::testUpdateConflict : assertion : elapsed 10001 Zookeeper_multi::testDeleteUpdateConflict : assertion : elapsed 10001 Zookeeper_multi::testAsyncMulti : assertion : elapsed 10001 Zookeeper_multi::testMultiFail : assertion : elapsed 10001 Zookeeper_multi::testCheck : assertion : elapsed 10001 Zookeeper_multi::testWatch : assertion : elapsed 10001 Zookeeper_multi::testSequentialNodeCreateInAsyncMulti : assertion : elapsed 10001 {noformat} |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 17 weeks ago | 0|i30h9r: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2462 | force authentication/authorization |
New Feature | Open | Minor | Unresolved | Unassigned | Botond Hejj | Botond Hejj | 01/Jul/16 10:50 | 04/Oct/19 10:55 | server | 1 | 4 | ZOOKEEPER-1634, ZOOKEEPER-2526 | This change introduces two new config options to force authorization and authentication: 1. disableWorldACL The purpose of this option is to disable the built-in mechanism which authorizes everyone. If it is turned on, then world/anyone ACLs are ignored and ZooKeeper will not authorize operations based on world/anyone. This option is useful to force some kind of authorization mechanism, which matters in a strictly audited environment. 2. forceAuthentication If this option is turned on, then ZooKeeper won't authorize any operation if the user has not authenticated, either with SASL or with addAuth. There is a way to enforce SASL authentication, but currently there is no way to enforce authentication using the plugin mechanism. Enforcing authentication for that is trickier, since authentication can arrive at any later time. This option doesn't drop the connection if there was no authentication; it only throws NoAuth for any operation until the Auth packet arrives. |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 1 year, 36 weeks, 2 days ago | 0|i30fn3: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
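The proposed forceAuthentication behaviour described above, rejecting operations with NoAuth while keeping the connection open until an Auth packet arrives, can be sketched as a simple gate. All names below are illustrative assumptions, not actual ZooKeeper server code:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the proposed forceAuthentication gate: every operation fails
// with a NoAuth-style error until at least one authentication id (from
// SASL or addAuth) is attached to the connection. Hypothetical names.
public class AuthGateSketch {
    private final boolean forceAuthentication;
    private final List<String> authIds = new ArrayList<>();

    public AuthGateSketch(boolean forceAuthentication) {
        this.forceAuthentication = forceAuthentication;
    }

    public void addAuth(String id) { authIds.add(id); }

    public String process(String op) {
        if (forceAuthentication && authIds.isEmpty()) {
            // The connection stays open; only the operation is rejected.
            throw new SecurityException("NoAuth for " + op);
        }
        return "ok:" + op;
    }
}
```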
| ZooKeeper | ZOOKEEPER-2461 | There is no difference between the observer and the participants in the leader election algorithm |
Improvement | Open | Major | Unresolved | Ryan Zhang | Ryan Zhang | Ryan Zhang | 30/Jun/16 13:59 | 05/Feb/20 07:16 | 3.5.0 | 3.7.0, 3.5.8 | quorum | 0 | 5 | We have observed a case that when a leader machine crashes hard, non-voting learners take a long time to detect the new leader. After looking at the details more carefully, we identified one potential improvement (and one bug fixed in the 3.5). | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 |
Patch
|
1 year, 14 weeks, 1 day ago | 0|i30dt3: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2460 | Remove javacc dependency from public Maven pom |
Bug | Closed | Critical | Fixed | Enrico Olivelli | Enrico Olivelli | Enrico Olivelli | 30/Jun/16 03:39 | 17/May/17 23:49 | 16/Mar/17 14:09 | 3.5.2 | 3.5.3, 3.6.0 | java client | 1 | 7 | BOOKKEEPER-970, ZOOKEEPER-1078 | during the vote of 3.5.2-ALPHA RC 0 we found a Maven dependency to javacc in published pom for zookeeper {code} <dependency> <groupId>net.java.dev.javacc</groupId> <artifactId>javacc</artifactId> <version>5.0</version><scope>compile</scope> </dependency> {code} this dependency appears not to be useful and should be removed this was the tested pom: https://repository.apache.org/content/groups/staging/org/apache/zookeeper/zookeeper/3.5.2-alpha/zookeeper-3.5.2-alpha.pom |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 1 week ago | 0|i30cqf: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2459 | Update NOTICE file with Netty notice |
Bug | Closed | Blocker | Fixed | Flavio Paiva Junqueira | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 29/Jun/16 18:43 | 21/Jul/16 16:18 | 30/Jun/16 00:13 | 3.5.2, 3.6.0 | 0 | 4 | Bubbling up the Netty notice. According to the ALv2 item 4, we need to include it in our top notice, it isn't sufficient to have it in the bundle. | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 38 weeks ago |
Reviewed
|
0|i30c7z: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2458 | Remove license file for servlet-api dependency |
Bug | Closed | Major | Fixed | Flavio Paiva Junqueira | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 29/Jun/16 17:48 | 21/Jul/16 16:18 | 29/Jun/16 23:59 | 3.5.2, 3.6.0 | 3.5.2, 3.6.0 | 0 | 3 | ZOOKEEPER-2457 | In ZOOKEEPER-2235, we changed the license of the servlet-api dependency to the correct one ALv2, but didn't remove the CDDL license file, which is incorrect. This jira removes the incorrect license file. | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 38 weeks ago |
Reviewed
|
0|i30c4f: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2457 | Remove license file for servlet-api dependency |
Bug | Closed | Major | Duplicate | Flavio Paiva Junqueira | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 29/Jun/16 17:47 | 19/Dec/19 18:01 | 29/Jun/16 17:55 | 3.5.2 | 0 | 1 | ZOOKEEPER-2458 | In ZOOKEEPER-2235, we changed the license of the servlet-api dependency to the correct one ALv2, but didn't remove the CDDL license file, which is incorrect. This jira removes the incorrect license file. | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 38 weeks, 1 day ago | 0|i30c3z: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2456 | Provide API to get user from different authentication providers |
Improvement | Open | Major | Unresolved | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 29/Jun/16 15:32 | 05/Feb/20 07:16 | 3.7.0, 3.5.8 | server | 0 | 2 | Currently the ZooKeeper server uses the same field to store both the user name and the password. Provide a mechanism to separate the user and password, either by adding a new field or by adding a new API. DETAILS: The org.apache.zookeeper.data.Id class is used to store scheme and id. {code} public Id( String scheme, String id) {code} The id field holds only the user in most cases, but in some cases it holds the user as well as the password. By default there are only four authentication providers: DigestAuthenticationProvider, IPAuthenticationProvider, SASLAuthenticationProvider, X509AuthenticationProvider. In code we can check: if the scheme is digest then {{id.split(":")\[0\]}} is the user, otherwise id is the user. This works only if we are limited to the above four authentication providers. But custom authentication providers are very important and very commonly used. How will the ZooKeeper code know what the user is: id, {{id.split(":")\[0\]}}, or something else? So there is a need to add a new API which AuthenticationProvider implementations can use to define what the user is. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 38 weeks, 1 day ago | 0|i30c13: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
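The status quo the issue complains about, where only the digest scheme packs "user:password" into the id field so callers must hard-code per-scheme knowledge, can be illustrated with a scheme-dependent split. This is a sketch of the current situation, not the proposed API:

```java
// Illustrates why ZOOKEEPER-2456 asks for a provider-level API: today a
// caller must hard-code per-scheme knowledge to recover the user from an
// Id. This switch sketches the status quo, not proposed code.
public class IdUserSketch {
    public static String userOf(String scheme, String id) {
        if ("digest".equals(scheme)) {
            // digest ids are "user:password"; only the prefix is the user.
            return id.split(":")[0];
        }
        // ip, sasl, x509: the whole id is treated as the user, which may
        // be wrong for a custom provider; hence the proposal.
        return id;
    }
}
```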
| ZooKeeper | ZOOKEEPER-2455 | unexpected server response ZRUNTIMEINCONSISTENCY |
Bug | Open | Major | Unresolved | Unassigned | pradeep | pradeep | 28/Jun/16 02:01 | 05/Feb/20 07:17 | 3.5.1 | 3.7.0, 3.5.8 | c client | 0 | 6 | Hi folks, I am hitting an error in my C client code; below is the set of operations I perform: 1. The ZooKeeper client is connected to ZooKeeper server S1, and a new server S2 gets added. 2. Monitor the ZooKeeper server config at the client and, on a change of server config, call zoo_set_servers from the client. 3. The client can issue operations like zoo_get just after the call to zoo_set_servers. 4. I can see in the logs that the ZooKeeper thread connects to the new server just after the zoo_get call: 2016-04-11 03:46:50,655:1207(0xf26ffb40):ZOO_INFO@check_events@2345: initiated connection to server [128.0.0.5:61728] 2016-04-11 03:46:50,658:1207(0xf26ffb40):ZOO_INFO@check_events@2397: session establishment complete on server [128.0.0.5:61728], sessionId=0x4000001852c000c, negotiated timeout=20000 5. Sometimes I see errors like the following: 2016-04-11 03:46:50,662:1207(0xf26ffb40):ZOO_ERROR@handle_socket_error_msg@2923: Socket [128.0.0.5:61728] zk retcode=-2, errno=115(Operation now in progress): unexpected server response: expected 0x570b82fa, but received 0x570b82f9 and zoo_get returns (-2), indicating ZRUNTIMEINCONSISTENCY. What is the issue here? Should I retry the zoo_get operation? Or should I wait for zoo_set_servers to complete (i.e., wait for the connection establishment notification)? Thanks, |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 16 weeks, 5 days ago | 0|i308k7: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2454 | Limit Connection Count based on User |
New Feature | Open | Minor | Unresolved | Botond Hejj | Botond Hejj | Botond Hejj | 27/Jun/16 05:16 | 09/Oct/16 21:54 | server | 0 | 8 | ZOOKEEPER-2280 | ZooKeeper can currently limit the connection count from clients coming from the same IP. It is a great feature for stopping malfunctioning clients from DOS-ing the server with many requests. I propose additional safeguards for ZooKeeper: it would be great if, optionally, the connection count could be limited for a specific user, or for a specific user on an IP. This helps in cases where a ZooKeeper ensemble is shared by multiple users and those users share the same client IPs, which can be common in container-based cloud deployments where the external IP of multiple clients can be the same. |
9223372036854775807 | No Perforce job exists for this issue. | 3 | 9223372036854775807 | 3 years, 23 weeks, 3 days ago | 0|i304jb: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2453 | Cannot compile on ARM: "Error: bad instruction `lock xaddl" |
Bug | Open | Minor | Unresolved | Unassigned | Markus Thies | Markus Thies | 25/Jun/16 12:49 | 28/Jun/16 22:51 | 3.4.5 | c client | 1 | 7 | ZOOKEEPER-1374 | Jessie, Raspberry | It seems that this is a bug equivalent to the issue ZOOKEEPER-1374. make[5]: Entering directory '/home/pi/Downloads/mesos-0.28.2/build/3rdparty/zookeeper-3.4.5/src/c' if /bin/bash ./libtool --tag=CC --mode=compile gcc -DHAVE_CONFIG_H -I. -I. -I. -I./include -I./tests -I./generated -DTHREADED -g -O2 -D_GNU_SOURCE -MT libzkmt_la-mt_adaptor.lo -MD -MP -MF ".deps/libzkmt_la-mt_adaptor.Tpo" -c -o libzkmt_la-mt_adaptor.lo `test -f 'src/mt_adaptor.c' || echo './'`src/mt_adaptor.c; \ then mv -f ".deps/libzkmt_la-mt_adaptor.Tpo" ".deps/libzkmt_la-mt_adaptor.Plo"; else rm -f ".deps/libzkmt_la-mt_adaptor.Tpo"; exit 1; fi gcc -DHAVE_CONFIG_H -I. -I. -I. -I./include -I./tests -I./generated -DTHREADED -g -O2 -D_GNU_SOURCE -MT libzkmt_la-mt_adaptor.lo -MD -MP -MF .deps/libzkmt_la-mt_adaptor.Tpo -c src/mt_adaptor.c -fPIC -DPIC -o libzkmt_la-mt_adaptor.o /tmp/ccs0G1lb.s: Assembler messages: /tmp/ccs0G1lb.s:1589: Error: bad instruction `lock xaddl r1,[r0]' Makefile:743: recipe for target 'libzkmt_la-mt_adaptor.lo' failed |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 38 weeks, 1 day ago | 0|i303fr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2452 | Back-port ZOOKEEPER-1460 to 3.4 for IPv6 literal address support. |
Bug | Closed | Critical | Fixed | Abraham Fine | Chris Nauroth | Chris Nauroth | 23/Jun/16 16:17 | 04/Sep/16 01:27 | 11/Aug/16 12:14 | 3.4.9 | quorum | 0 | 4 | ZOOKEEPER-2247, ZOOKEEPER-1460 | Via code inspection, I see that the "server.nnn" configuration key does not support literal IPv6 addresses because the property value is split on ":". In v3.4.3, the problem is in QuorumPeerConfig: {noformat} String parts[] = value.split(":"); InetSocketAddress addr = new InetSocketAddress(parts[0], Integer.parseInt(parts[1])); {noformat} In the current trunk (http://svn.apache.org/viewvc/zookeeper/trunk/src/java/main/org/apache/zookeeper/server/quorum/QuorumPeer.java?view=markup) this code has been refactored into QuorumPeer.QuorumServer, but the bug remains: {noformat} String serverClientParts[] = addressStr.split(";"); String serverParts[] = serverClientParts[0].split(":"); addr = new InetSocketAddress(serverParts[0], Integer.parseInt(serverParts[1])); {noformat} This bug probably affects very few users because most will naturally use a hostname rather than a literal IP address. But given that IPv6 addresses are supported for clients via ZOOKEEPER-667 it seems that server support should be fixed too. |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 32 weeks ago |
Reviewed
|
0|i2zzrj: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
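The back-ported fix hinges on not splitting a literal IPv6 address on ":" as the quoted QuorumPeerConfig code does. A bracket-aware parse of a "server.N" value such as "[::1]:2888:3888" could look like the following; this is a sketch of the parsing idea under that assumed input format, not the ZOOKEEPER-1460 patch itself:

```java
import java.net.InetSocketAddress;

// Sketch of bracket-aware parsing for "server.N" values such as
// "[::1]:2888:3888" or "host:2888:3888". Not the actual ZOOKEEPER-1460
// patch; it illustrates why a bare split(":") breaks IPv6 literals.
public class QuorumAddressSketch {
    public static InetSocketAddress parseQuorumAddr(String value) {
        String host;
        String rest;
        if (value.startsWith("[")) {
            int end = value.indexOf(']');
            if (end < 0) throw new IllegalArgumentException("Unclosed IPv6 literal: " + value);
            host = value.substring(1, end);   // keep the colons inside [...]
            rest = value.substring(end + 2);  // skip "]:"
        } else {
            int colon = value.indexOf(':');
            host = value.substring(0, colon);
            rest = value.substring(colon + 1);
        }
        // First remaining field is the quorum (peer) port.
        int quorumPort = Integer.parseInt(rest.split(":")[0]);
        return InetSocketAddress.createUnresolved(host, quorumPort);
    }
}
```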
| ZooKeeper | ZOOKEEPER-2451 | do not make use of system properties for security configuration |
Bug | Resolved | Major | Duplicate | Unassigned | Sergey Shelukhin | Sergey Shelukhin | 23/Jun/16 15:32 | 23/Jun/16 15:56 | 23/Jun/16 15:56 | 0 | 3 | ZOOKEEPER-2139 | Is there (or could there be) a way to set up security for ZK client that doesn't involve calls like {noformat} System.setProperty(ZooKeeperSaslClient.LOGIN_CONTEXT_NAME_KEY, SASL_LOGIN_CONTEXT_NAME); {noformat}? I was looking at an unrelated security configuration issue and stumbled upon this pattern; we use (at least) 2 ZK connections from the same process, that (for now) use the same config but different context names, one of which is in a library out of our control. Unless I'm missing something with this pattern it seems extremely brittle. Or unless there's an alternative approach already; if there is, hadoop-common and hive don't use it atm, old approach seems prevalent. There should be an approach that is at least slightly more solid, like say public globals... maybe even threadlocals! |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 39 weeks ago | 0|i2zzov: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2450 | Upgrade Netty version due to security vulnerability (CVE-2014-3488) |
Bug | Closed | Critical | Fixed | Michael Han | Michael Han | Michael Han | 22/Jun/16 17:11 | 21/Jul/16 16:18 | 22/Jun/16 17:12 | 3.4.8, 3.5.1, 3.6.0 | 3.4.9, 3.5.2, 3.6.0 | security, server | 0 | 2 | This JIRA recreates ZOOKEEPER-2432 which was deleted as the collateral damage during the spamming fighting effort Apache Infrastructure Team did weeks ago. Recreate the JIRA for the record so external documentations can link back to this JIRA. The SslHandler in Netty before 3.9.2 allows remote attackers to cause a denial of service (infinite loop and CPU consumption) via a crafted SSLv2Hello message [1]. We are using netty 3.7.x in ZK for 3.4/3.5/3.6, which is affected by this vulnerability. [1] http://cve.mitre.org/cgi-bin/cvename.cgi?name=2014-3488 [2] http://netty.io/news/ |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 39 weeks, 1 day ago | 0|i2zxlb: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2449 | Refresh the invalid host which got unknownhostexception during connection in StaticHostProvider |
Improvement | Open | Minor | Unresolved | Unassigned | Vishal Khandelwal | Vishal Khandelwal | 20/Jun/16 08:52 | 20/Jun/16 13:07 | 0 | 3 | ZOOKEEPER-2447 | As per the current logic, if a host gets an UnknownHostException while the ZooKeeper client object is connecting, it is never re-resolved after that. In case the host comes back, this class won't try to connect to it again. Ideally, in StaticHostProvider.next, when the end of the list is reached, all of the hosts in the connection string should be re-tested and refreshed. That way, if a host comes back while the client object stays around for a longer duration, its real-time status can be picked up and the client can take advantage of it. This also benefits the scenario described in https://issues.apache.org/jira/browse/ZOOKEEPER-2447 |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 |
Patch
|
3 years, 39 weeks, 3 days ago | 0|i2zqn3: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2448 | Expose result of reconfiguration via JMX |
Improvement | Open | Major | Unresolved | Unassigned | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 19/Jun/16 11:16 | 19/Jun/16 11:16 | 3.5.1 | 0 | 2 | Currently, the main way to get the result of a reconfiguration is to look at the server logs. One possible way to expose a reconfiguration command and its result is to do it via JMX. One advantage over doing a 4lw is that in the case we mess up with the client port, we can still see what is going on. | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 39 weeks, 4 days ago | 0|i2zpin: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2447 | ZooKeeper adds significant delay when one of the quorum hosts is not reachable |
Bug | Patch Available | Major | Unresolved | Vishal Khandelwal | Vishal Khandelwal | Vishal Khandelwal | 15/Jun/16 08:56 | 11/Nov/16 02:30 | 3.4.9 | 3.4.9 | 0 | 7 | ZOOKEEPER-2449 | The StaticHostProvider.resolveAndShuffle method adds all of the addresses in the quorum which resolve to a list, shuffles them, and sends them back to the client connection class. If, after shuffling, the first node happens to be one which is not reachable, ClientCnxn.SendThread.run will keep trying to connect to it until a timeout and only then move to a different node. This adds a random delay to the ZooKeeper connection whenever a host is down. Instead, we could check whether the host is reachable in StaticHostProvider and skip it when isReachable returns false, the same as we do for UnknownHostException. This can be tested with the following test code by providing a valid host which is not reachable; for a quick test, comment out Collections.shuffle(tmpList, sourceOfRandomness); in StaticHostProvider.resolveAndShuffle. {code} @Test public void test() throws Exception { EventsWatcher watcher = new EventsWatcher(); QuorumUtil qu = new QuorumUtil(1); qu.startAll(); ZooKeeper zk = new ZooKeeper("<hostname>:2181," + qu.getConnString(), 180 * 1000, watcher); watcher.waitForConnected(CONNECTION_TIMEOUT * 5); Assert.assertTrue("connection Established", watcher.isConnected()); zk.close(); } {code} The following fix could be added to StaticHostProvider.resolveAndShuffle: {code} if (taddr.isReachable(4000 /* can be some value */)) { tmpList.add(new InetSocketAddress(taddr, address.getPort())); } {code} |
9223372036854775807 | No Perforce job exists for this issue. | 6 | 9223372036854775807 | 3 years, 18 weeks, 6 days ago | 0|i2zhtz: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2446 | License missing in the pom.xml |
Bug | Resolved | Critical | Duplicate | Unassigned | Sergio Fernández | Sergio Fernández | 08/Jun/16 06:03 | 08/Jun/16 06:32 | 08/Jun/16 06:32 | 3.4.8 | build | 0 | 2 | ZOOKEEPER-2373 | Assembling a {{NOTICE}} file in a project that uses Zookeeper I've realized the {{pom.xml}} does not declare the license, at least in the whole {{3.4.x}} branch, e.g., https://repo1.maven.org/maven2/org/apache/zookeeper/zookeeper/3.4.8/zookeeper-3.4.8.pom | maven | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 41 weeks, 1 day ago | 0|i2z5f3: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2445 | Adding a new node to the cluster with a wrong config file succeeded |
Bug | Open | Critical | Unresolved | Unassigned | Ahaha | Ahaha | 08/Jun/16 04:56 | 08/Jun/16 10:33 | 3.4.6 | server | 0 | 2 | 86400 | 86400 | 0% | Centos 6.5 JDK7 | I want to add a new node to a test cluster with three node. So I tried with the folling steps: 1. I copied one foller zookeeper directory, and edit clientPort, dataDir , logDir, myid file, and then add a new record server.newId=hostname:newPort:newEPort into zoo.cfg, for test I keep the leader configuration there and remove the other two configuration, I tested ,this failed. So I open the zoo.cfg , copied the lead configuration line just edit the serverId, and this should be a wrong configuration file as the server does not match with the serverid. But strangely , it successed. | 0% | 0% | 86400 | 86400 | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 41 weeks, 1 day ago | 0|i2z5an: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2444 | Fix NULL handling for getlogin() call |
Bug | Open | Major | Unresolved | Unassigned | Pawel Rozlach | Pawel Rozlach | 07/Jun/16 07:01 | 09/Jun/16 00:58 | 3.4.8 | c client | 0 | 3 | https://github.com/apache/zookeeper/pull/70 | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 41 weeks ago | 0|i2z2xz: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2443 | Invalid Memory Access (SEGFAULT) and undefined behaviour in c client |
Bug | Open | Major | Unresolved | Chris Nauroth | Paul Asmuth | Paul Asmuth | 07/Jun/16 05:42 | 15/Jun/16 00:13 | c client | 1 | 3 | ZOOKEEPER-1029 | Hey, I encountered some issues with the zookeepeer c client. The problem starts in the zookeeper_init_internal method. A lot of initialization work is performed here and if any of the initialization routines fails, the code jumps to the "abort" label to perform various cleanup tasks [1]. The conceptual issue is that a bunch of the cleanup code tries to take locks on the zk structure that are only intialized in adaptor_init in line 1181 (at the very end of the zookeeper_init_internal method) [2]. So if we fail before reaching adaptor_init this causes trouble. One specific instance of an invalid memory access that this causes is in free_completions [3]. Here, in line 1651 zoo_lock_auth will fail because it tries to grab an invalid mutex, after which the a_list struct is uninitialized (the linked list next pointer points to random memory) and subsequently the free routine segfaults. An easy way to trigger this bug-path is to pass an invalid hostname, or do anything else that causes the zookeeper_init_internal method to fail before adaptor_init. In my local checkout/codebase, I have added correct initialization for the a_list struct in the free_completions routine, which at least fixes the segfault for now. However this still leaves the issue that the cleanup code tries to grab a lot of invalid locks, which all fail. I think in order to fix this properly, one would need to do a larger refactoring of the code (add another adaptor_preinit routine to the adaptor interface maybe?) and I wasn't sure if that would be appreciated, so I didn't attach a patch for now. If someone wants me to try and clean this up, I would be happy to give it a try. 
PS: I think this bug was introduced in SVN #1719528, which - as it seems - tried to work around the uninitialized locks problem by adding an int return code to all the lock_xxx functions, allowing them to indicate a failure. The change introduce the invalid memory access since some (always required) init code is only run after the lock was obtained successfully. However, I think there is a much large issue with the change and I think it must be reverted. Trying to lock an uninitialized mutex is undefined behaviour on POSIX and may lead to deadlocks, etc. >> If mutex does not refer to an initialized mutex object, the behavior of pthread_mutex_lock(), pthread_mutex_trylock(), and pthread_mutex_unlock() is undefined. http://pubs.opengroup.org/onlinepubs/9699919799/functions/pthread_mutex_lock.html [1] https://github.com/apache/zookeeper/blob/trunk/src/c/src/zookeeper.c#L1078 [2] https://github.com/apache/zookeeper/blob/trunk/src/c/src/zookeeper.c#L1181 [3] https://github.com/apache/zookeeper/blob/trunk/src/c/src/zookeeper.c#L1651 ------------------ BACKTRACE Program received signal SIGSEGV, Segmentation fault. 
0x000000010004f6d5 in free_auth_completion (a_list=0x7fff5fbff048) at /deps/3rdparty/zookeeper/source/src/zookeeper.c:260 260 tmp = tmp->next; #0 0x000000010004f6d5 in free_auth_completion (a_list=0x7fff5fbff048) at /deps/3rdparty/zookeeper/source/src/zookeeper.c:260 #1 0x000000010004f500 in free_completions (zh=0x1003022f0, callCompletion=1, reason=-116) at /deps/3rdparty/zookeeper/source/src/zookeeper.c:1219 #2 0x0000000100057bfd in cleanup_bufs (zh=0x1003022f0, callCompletion=1, rc=-116) at /deps/3rdparty/zookeeper/source/src/zookeeper.c:1227 #3 0x000000010004ee42 in destroy (zh=0x1003022f0) at /deps/3rdparty/zookeeper/source/src/zookeeper.c:393 #4 0x000000010004eaf3 in zookeeper_init (host=0x1006005b0 "xxxinvalidhostname:2181", watcher=0x100007670 <xxx::zk_watch_cb(_zhandle*, int, int, char const*, void*)>, recv_timeout=10000, clientid=0x0, context=0x100600350, flags=0) at /deps/3rdparty/zookeeper/source/src/zookeeper.c:877 |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 40 weeks, 1 day ago | 0|i2z2tj: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2442 | Socket leak in QuorumCnxManager connectOne |
Bug | Closed | Major | Fixed | Michael Han | Michael Han | Michael Han | 03/Jun/16 19:05 | 17/May/17 23:43 | 03/Aug/16 00:32 | 3.5.1 | 3.5.3, 3.6.0 | quorum, server | 0 | 7 | The function connectOne() in QuorumCnxManager.java sometimes fails to release a socket allocated by Socket(): {code} try { if (LOG.isDebugEnabled()) { LOG.debug("Opening channel to server " + sid); } Socket sock = new Socket(); setSockOpts(sock); sock.connect(self.getView().get(sid).electionAddr, cnxTO); if (LOG.isDebugEnabled()) { LOG.debug("Connected to server " + sid); } initiateConnection(sock, sid); } catch (UnresolvedAddressException e) { // Sun doesn't include the address that causes this // exception to be thrown, also UAE cannot be wrapped cleanly // so we log the exception in order to capture this critical // detail. LOG.warn("Cannot open channel to " + sid + " at election address " + electionAddr, e); throw e; } catch (IOException e) { LOG.warn("Cannot open channel to " + sid + " at election address " + electionAddr, e); } {code} There is another place, in Listener.run(), where the client socket is not explicitly closed. |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 21 weeks, 1 day ago | 0|i2yz2v: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
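The leak described above can be avoided by closing the socket on every failure path. Below is a minimal stdlib-only sketch of that pattern; connectOne here is a hypothetical helper mirroring the shape of the QuorumCnxManager code, not the actual ZooKeeper method:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ConnectOneSketch {
    // Hypothetical helper mirroring QuorumCnxManager.connectOne(): if
    // connect() throws, the already-allocated Socket must be closed before
    // reporting the error, otherwise its file descriptor leaks.
    public static boolean connectOne(Socket sock, InetSocketAddress addr, int timeoutMs) {
        try {
            sock.connect(addr, timeoutMs);
            return true;
        } catch (IOException e) {
            // The fix: release the socket on the failure path.
            try {
                sock.close();
            } catch (IOException ignored) {
                // nothing more we can do here
            }
            return false;
        }
    }
}
```

In the real code the same applies to the UnresolvedAddressException branch, which rethrows without closing.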
| ZooKeeper | ZOOKEEPER-2441 | C API maps getaddrinfo() transient and permanent failures to the same value |
Bug | Open | Major | Unresolved | Unassigned | Andras Erdei | Andras Erdei | 03/Jun/16 04:37 | 03/Jun/16 04:37 | 0 | 1 | Linux | https://github.com/apache/zookeeper/blob/trunk/src/c/src/zookeeper.c#L560 maps getaddrinfo() return values indicating a transient failure (e.g. EAI_AGAIN) to the same value (EINVAL) that zookeeper_init() uses to indicate permanent problems (like an empty host spec or an invalid port). As a result, client code has no way to decide whether it should re-try the initialization or abort (asking for manual intervention). As discussed e.g. in https://issues.apache.org/jira/browse/MESOS-3790 zookeeper should most likely retry on this and other transient failures automagically. Independently, the switch above should be fixed to map EAI_* values to different E* values, allowing client code some flexibility in handling and reporting errors deemed permanent by zookeeper. Note that there is a related bug https://issues.apache.org/jira/browse/ZOOKEEPER-1451 -- zookeeper also does not report the problem properly in its own logs, making debugging these problems even harder. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 41 weeks, 6 days ago | 0|i2yxl3: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2440 | permanent SESSIONMOVED error after client app reconnects to zookeeper cluster |
Bug | Open | Major | Unresolved | Ryan Zhang | Ryan Zhang | Ryan Zhang | 02/Jun/16 17:01 | 05/Feb/20 07:16 | 3.5.0 | 3.7.0, 3.5.8 | quorum | 0 | 6 | ZOOKEEPER-710 fixed the issue when the request is not a multi request. However, the multi request is handled a little bit differently as the code didn't throw the SESSIONMOVED exception. In addition, the exception is set in the request by the leader so it will be lost in the commit process and by the time the final processor sees it, it will be gone. | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 |
Patch
|
1 year, 17 weeks, 1 day ago | 0|i2ywun: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2439 | The order of asynchronous setACL is not correct. |
Bug | Open | Major | Unresolved | Unassigned | Kazuaki Banzai | Kazuaki Banzai | 01/Jun/16 23:17 | 19/Sep/16 04:05 | 3.4.8, 3.5.1 | 1 | 4 | Linux Ubuntu Mac OS X |
Within a given client connection, the execution of commands on the ZooKeeper server is always ordered, as both synchronous and asynchronous commands are dispatched through queuePacket (directly or indirectly). In other words, ZooKeeper guarantees sequential consistency: updates from a client will be applied in the order that they were sent. However, the order of asynchronous setACL is not correct on Ubuntu. When asynchronous setACL is called BEFORE another API is called, the asynchronous setACL is applied AFTER the other API. For example, if a client calls (1) asynchronous setACL to remove all permissions of node "/" and (2) synchronous create to create node "/a", the synchronous create should fail, but it succeeds on Ubuntu. (We can see all permissions of node "/" are removed when the client calls getACL on node "/" after (2), so (1) is applied AFTER (2). If we call getACL between (1) and (2), the synchronous case works correctly but the asynchronous case still produces the bug.) The attached unit test reproduces this scenario. It fails on Linux Ubuntu but succeeds on Mac OS X. If used on a heavily loaded server on Mac OS, the test sometimes fails as well, but only rarely. |
acl | 9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 3 years, 26 weeks, 3 days ago | 0|i2yv7j: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
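The ordering guarantee violated above - every operation on a connection, sync or async, funnelled through one FIFO queue - can be illustrated without any ZooKeeper classes. This is a sketch of the expected semantics, not ZooKeeper's actual queuePacket code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class OrderedDispatch {
    // Single FIFO queue: whatever is submitted first is applied first,
    // regardless of whether the caller waits for the result (sync) or not (async).
    private final BlockingQueue<Runnable> queue = new ArrayBlockingQueue<>(64);
    private final List<String> applied = new ArrayList<>();
    private final Thread worker = new Thread(() -> {
        try {
            while (true) {
                queue.take().run();
            }
        } catch (InterruptedException e) {
            // shutdown
        }
    });

    public OrderedDispatch() {
        worker.setDaemon(true);
        worker.start();
    }

    public void submitAsync(String op) throws InterruptedException {
        queue.put(() -> applied.add(op)); // fire and forget
    }

    public void submitSync(String op) throws InterruptedException {
        final boolean[] done = new boolean[1];
        synchronized (done) {
            queue.put(() -> {
                applied.add(op);
                synchronized (done) {
                    done[0] = true;
                    done.notify();
                }
            });
            while (!done[0]) {
                done.wait(); // block until the worker has applied this op
            }
        }
    }

    public List<String> applied() {
        return applied;
    }
}
```

Under these semantics an async setACL submitted before a sync create must be applied first; the bug report says the real client breaks exactly this invariant on Ubuntu.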
| ZooKeeper | ZOOKEEPER-2438 | Create sample program for multi client single JVM scenario. |
Wish | Patch Available | Minor | Unresolved | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 01/Jun/16 11:21 | 01/Jun/16 11:46 | 0 | 1 | Create sample program for multi client single JVM scenario which is handled in ZOOKEEPER-2139. | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 42 weeks, 1 day ago | 0|i2ytzz: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2437 | Need detailed diagnostics for Zookeeper Connection Issues |
Improvement | Open | Critical | Unresolved | Unassigned | harcor | harcor | 31/May/16 15:43 | 02/Jun/16 17:42 | 3.5.1 | java client | 0 | 3 | Using a zookeeper ensemble with Apache Solr, the client connection (socket) can be disconnected by either Solr or Zookeeper. If the connection fails on the ZooKeeper side and we are in DEBUG mode, then additional diagnostics should be written to the log with the connection exception. | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 42 weeks ago | 0|i2yraf: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2436 | Inconsistent truncation logic and out of sync logging and comments in recovery code |
Bug | Open | Minor | Unresolved | Unassigned | Ed Rowe | Ed Rowe | 30/May/16 04:39 | 30/May/16 04:40 | 1 | 3 | Consider this scenario: # Ensemble of nodes A, B, C, D, E with A as the leader # Nodes A, B get partitioned from C, D, E # Leader A receives a write _before it detects_ that it has lost its quorum so it logs the write # Nodes C, D, E elect node C as the leader # Partition resolves, and nodes A, B rejoin the C, D, E ensemble with C continuing to lead Depending on whether any updates have occurred in the C, D, E ensemble between steps 4 and 5, the re-joining nodes A,B will either receive a TRUNC or a SNAP. *The problems:* # If updates have occurred in the C,D,E ensemble, SNAP is sent to the re-joining nodes. This occurs because the code in LearnerHandler.queueCommittedProposals() notices that truncation would cross epochs and bails out, leading to a SNAP being sent. A comment in the code says "We cannot send TRUNC that cross epoch boundary. The learner will crash if it is asked to do so. We will send snapshot this those cases." LearnerHandler.syncFollower() then logs an ERROR saying "Unhandled scenario for peer sid: # fall back to use snapshot" and a comment with this code says "This should never happen, but we should fall back to sending snapshot just in case." Presumably since queueCommittedProposals() is intentionally triggering the snapshot logic, this is not an "unhandled scenario" that warrants logging an ERROR nor is it a case that "should never happen". This inconsistency should be cleaned up. It might also be the case that a TRUNC would work fine in this scenario - see #2 below. # If no updates have occurred in the C,D,E ensemble, when nodes A,B rejoin LearnerHandler.syncFollower() goes into the "Newer than commitedLog, send trunc and done" clause and sends them a TRUNC. This seems to work fine. However, this would also seem to be a cross-epoch TRUNC, which per the comment discussed above in #1, is expected to cause a crash in the learner. 
I haven't found anything special about a TRUNC that crosses epochs that would cause a crash in the learner, and I believe that at the time of the TRUNC (or SNAP), the learner is in the same state in both scenarios. It is certainly the case (pending resolution of ZOOKEEPER-1549) that TRUNC is not able to remove data that has been snapshotted, so perhaps detecting “cross-epoch” is a shortcut for trying to detect that scenario? If the resolution of ZOOKEEPER-1549 does not allow TRUNC through a snapshot (or alternately does not allow a benign TRUNC through a snapshot that may not contain uncommitted data), then this case should probably also be a SNAP. If TRUNC is allowed in this case, then perhaps it should also be allowed for case #1, which would be more performant. *While I certainly could have missed something, it would seem that either both cases should be SNAP or both should be a TRUNC given that the learner is in the same state in both cases*. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 42 weeks, 3 days ago | 0|i2yop3: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2435 | miss event when the leader stop |
Bug | Resolved | Minor | Not A Bug | Unassigned | BourneHan | BourneHan | 29/May/16 03:32 | 31/May/16 21:25 | 30/May/16 10:56 | 3.4.6 | build | 0 | 2 | Hi All, In my projects, I use three ZooKeeper servers as an ensemble: zk1 as a follower on 192.168.25.221, zk2 as a follower on 192.168.25.222, zk3 as the leader on 192.168.25.223. My two programs using ZooKeeper's C client run on 192.168.25.221 and 192.168.25.222. On seeing ZOO_CONNECTED_STATE, each program uses ZooKeeper to obtain a lock by doing the following: 1. Create a ZOO_EPHEMERAL | ZOO_SEQUENCE node under '/Lock/'. 2. Call getChildren() on the '/Lock/' node. 3. If the pathname created in step 1 has the lowest sequence number suffix, the program has the lock and does its work, then releases the lock by simply deleting the node created in step 1. 4. The program calls exists() with the watch flag set on the lowest sequence number node. 5. If exists() returns false, go to step 2. Otherwise, wait for a notification (ZOO_DELETED_EVENT) for the pathname from the previous step before going to step 2. When I stop a follower such as zk1/zk2, everything is ok: my programs on 192.168.25.221 and 192.168.25.222 do their work in order under the lock's control. When I stop the leader, zk3 (after restarting zk1/zk2), my program on 192.168.25.221 gets the lock and releases it normally, and my program on 192.168.25.222 detects the existence of the node created by the program on 192.168.25.221, but keeps waiting and never receives the ZOO_DELETED_EVENT notification. Does anyone else see the same problem? PS: 1. The attachment is the log of the zookeeper on 192.168.25.221 and 192.168.25.222 when I stop the leader on 192.168.25.223. 2. Actually I have more programs using ZooKeeper's C client running on 192.168.25.221, 192.168.25.222 and 192.168.25.223. 3. The system time on 192.168.25.221 is 1 minute and 33 seconds behind 192.168.25.222 and 192.168.25.223, 
so when I stop the leader, it's 2016-05-28 22:33:34 on 192.168.25.221 and 2016-05-28 22:35:07 on 192.168.25.222. |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 3 years, 42 weeks, 1 day ago | 0|i2ynvj: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2434 | Error handling issues in truncation code |
Bug | Open | Minor | Unresolved | Unassigned | Ed Rowe | Ed Rowe | 28/May/16 21:56 | 29/May/16 11:39 | server | 0 | 5 | # If FileTxnLog.truncate() is unable to delete a log file, it calls LOG.warn() but otherwise does nothing. I think this should be a fatal error not a logged warning. Otherwise those log files are going be be read later when the DB is reloaded and data that should have been removed will still be present. # Learner.syncWithLeader() expects ZKDatabase.truncateLog() to return false on failure, and if this occurs it calls System.exit(). However, this will never happen because ZKDatabase.truncateLog() never returns false - instead an exception is thrown on failure. ZKDatabase.truncateLog() calls FileTxnSnapLog.truncateLog() which calls FileTxnLog.truncate(), each of which is documented to return false on failure but none of which ever does in practice. TruncateTest.testTruncationNullLog() clearly expects an exception on error in ZKDatabase.truncateLog() so there are conflicting expectations in the codebase. It appears that if Learner.syncWithLeader() encounters an exception, System.exit() will _not_ be called and instead we land in the main run loop where we'll start the whole thing again. So there are two things to deal with: a) whether we want to do system.exit or go back to the main run loop if truncation fails, and b) sort out the return false vs. throw exception discrepancy and make it consistent (and change the docs as needed). I'm happy to propose a patch, but I'd need people with more experience in the codebase to weigh in on the questions above. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 42 weeks, 4 days ago | 0|i2ynnr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
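The first point above - a failed delete of a truncated log file should be fatal rather than a logged warning - can be sketched as follows. deleteOrFail is a hypothetical helper, not the actual FileTxnLog API:

```java
import java.io.File;
import java.io.IOException;

public class TruncateSketch {
    // Hypothetical replacement for the LOG.warn() path in FileTxnLog.truncate():
    // a log file that survives deletion would be replayed on the next DB reload,
    // resurrecting data that was supposed to be truncated away.
    public static void deleteOrFail(File f) throws IOException {
        if (f.exists() && !f.delete()) {
            throw new IOException("failed to delete txn log file: " + f);
        }
    }
}
```

Throwing here also resolves the "return false vs. throw" discrepancy the report describes, in favour of the behaviour TruncateTest.testTruncationNullLog() already expects.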
| ZooKeeper | ZOOKEEPER-2433 | ZooKeeperSaslServer: allow user principals in subject |
Improvement | Closed | Major | Fixed | Andy B | Andy B | Andy B | 23/May/16 05:47 | 21/Jul/16 16:18 | 15/Jun/16 16:32 | 3.5.1 | 3.5.2, 3.6.0 | security | 0 | 8 | 18000 | 18000 | 0% | ZOOKEEPER-1045, ZOOKEEPER-1467, HADOOP-10183 | The _createSaslServer_ function in ZooKeeperSaslServer +handles only service principal names+ (eg. *service_name/{color:blue}machine_name{color}@realm*), though sometimes user/service principal names +without host name+ (eg. *service_name@realm*) are used for authentication. |
0% | 0% | 18000 | 18000 | easyfix | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 |
Patch
|
3 years, 40 weeks, 1 day ago |
Reviewed
|
0|i2ycxz: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2432 | ZOOKEEPER-2431 Document how to use single thread C client API to monitor connection status. |
Sub-task | Open | Major | Unresolved | Michael Han | Michael Han | Michael Han | 22/May/16 12:43 | 05/Feb/20 07:17 | 3.4.8 | 3.7.0, 3.5.8 | c client, documentation | 0 | 1 | ZOOKEEPER-1676 | We need to document the correct approach (use watchers combined with zookeeper_interest, select, and zookeeper_process) for monitoring connection events between client and server using the single-threaded C library. See examples https://goo.gl/hql4B1 and https://github.com/fpj/zookeeper-book-example/blob/master/src/main/c/master.c. In particular, emphasize that the return code of zookeeper_interest should not be relied on for connection monitoring purposes. | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 43 weeks, 4 days ago | 0|i2yc8f: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2431 | C client documentation improvement |
Improvement | Open | Major | Unresolved | Michael Han | Michael Han | Michael Han | 22/May/16 12:37 | 05/Feb/20 07:17 | 3.4.8 | 3.7.0, 3.5.8 | c client, documentation | 1 | 2 | ZOOKEEPER-2432 | ZOOKEEPER-2090 | The existing documentation of C client in Programmer's Guide is incomplete and there are many TBD sections. We should improve the documentation by completing all the TBD sections and add more sample code to demonstrate C client API usage. | documentation | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 43 weeks, 4 days ago | 0|i2yc87: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2430 | Remove jute.maxbuffer limit packetLen in client side |
Improvement | Open | Major | Unresolved | Yong Zhang | Yong Zhang | Yong Zhang | 21/May/16 04:55 | 18/Jan/17 18:28 | 0 | 2 | jute.maxbuffer can be configured on both the client and server side. If we try to getChildren on a parent znode with a large number of child znodes, the client may fail to get them because packetLen is more than the configured jute.maxbuffer. Even though we can change the value via a Java system property, we have to restart the application to do so; moreover, once all the data has already arrived at the ZooKeeper client, checking the length/size is unnecessary. |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 3 years, 9 weeks, 1 day ago | 0|i2ybgv: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2429 | IbmX509 KeyManager and TrustManager algorithm not supported |
Bug | Open | Minor | Unresolved | Saurabh jain | Saurabh Jain | Saurabh Jain | 17/May/16 11:19 | 05/Feb/20 07:16 | 3.5.0 | 3.7.0, 3.5.8 | security, server | 0 | 3 | When connecting from a zookeeper client running in IBM WebSphere Application Server version 8.5.5, with SSL configured in ZooKeeper, the below mentioned exception is observed. org.jboss.netty.channel.ChannelPipelineException: Failed to initialize a pipeline. at org.jboss.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:208) at org.jboss.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:182) at org.apache.zookeeper.ClientCnxnSocketNetty.connect(ClientCnxnSocketNetty.java:112) at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1130) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1158) Caused by: org.apache.zookeeper.common.X509Exception$SSLContextException: Failed to create KeyManager at org.apache.zookeeper.common.X509Util.createSSLContext(X509Util.java:75) at org.apache.zookeeper.ClientCnxnSocketNetty$ZKClientPipelineFactory.initSSL(ClientCnxnSocketNetty.java:358) at org.apache.zookeeper.ClientCnxnSocketNetty$ZKClientPipelineFactory.getPipeline(ClientCnxnSocketNetty.java:348) at org.jboss.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:206) ... 4 more Caused by: org.apache.zookeeper.common.X509Exception$KeyManagerException: java.security.NoSuchAlgorithmException: SunX509 KeyManagerFactory not available at org.apache.zookeeper.common.X509Util.createKeyManager(X509Util.java:129) at org.apache.zookeeper.common.X509Util.createSSLContext(X509Util.java:73) ... 
7 more Caused by: java.security.NoSuchAlgorithmException: SunX509 KeyManagerFactory not available at sun.security.jca.GetInstance.getInstance(GetInstance.java:172) at javax.net.ssl.KeyManagerFactory.getInstance(KeyManagerFactory.java:9) at org.apache.zookeeper.common.X509Util.createKeyManager(X509Util.java:118) Reason : IBM websphere uses its own jre and supports only IbmX509 keymanager algorithm which is causing an exception when trying to get an key manager instance using SunX509 which is not supported. Currently KeyManager algorithm name (SunX509) is hardcoded in the class X509Util.java. Possible fix: Instead of having algorithm name hardcoded to SunX509 we can fall back to the default algorithm supported by the underlying jre. Instead of having this - KeyManagerFactory kmf = KeyManagerFactory.getInstance("SunX509"); TrustManagerFactory tmf = TrustManagerFactory.getInstance("SunX509"); can we have ? KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm()); TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm()); |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 41 weeks, 5 days ago | 0|i2y31r: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
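The proposed fix above needs only standard JSSE calls. A minimal stdlib-only sketch (outside the actual ZooKeeper X509Util class) that resolves the JRE's default algorithms instead of hardcoding SunX509:

```java
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.TrustManagerFactory;

public class DefaultX509Algorithms {
    // On an Oracle/OpenJDK JRE the defaults resolve to SunX509/PKIX; on an
    // IBM JRE they resolve to the IBM algorithms, so the hardcoded-name
    // NoSuchAlgorithmException from the stack trace above disappears.
    public static KeyManagerFactory defaultKeyManagerFactory() throws Exception {
        return KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
    }

    public static TrustManagerFactory defaultTrustManagerFactory() throws Exception {
        return TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
    }
}
```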
| ZooKeeper | ZOOKEEPER-2422 | Improve health reporting by adding heap stats |
Improvement | Patch Available | Major | Unresolved | Sergey Maslyakov | Sergey Maslyakov | Sergey Maslyakov | 02/May/16 13:41 | 05/Feb/20 07:11 | 3.4.8, 3.5.1, 3.6.0 | 3.7.0, 3.5.8 | 0 | 6 | In order to improve remote monitoring of the ZooKeeper instance using tools like Icinga/NRPE, it is very desirable to expose JVM heap stats via a light-weight interface. The "mntr" 4lw is a good candidate for this. | 9223372036854775807 | No Perforce job exists for this issue. | 3 | 9223372036854775807 | 3 years, 45 weeks, 1 day ago | 0|i2x1wf: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2421 | testSessionReuse is commented out |
Bug | Open | Major | Unresolved | Prasanth Mathialagan | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 02/May/16 10:56 | 12/Jun/18 00:43 | 0 | 8 | ZOOKEEPER-111 | This test case in SessionTest: {noformat} testSessionReuse {noformat} is commented out. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 1 year, 40 weeks, 2 days ago | 0|i2x1lr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2420 | Autopurge deletes log file prior to oldest retained snapshot even though restore may need it |
Bug | Resolved | Major | Duplicate | Ed Rowe | Ed Rowe | Ed Rowe | 02/May/16 01:13 | 18/Nov/16 02:39 | 18/Nov/16 02:11 | server | 0 | 7 | ZOOKEEPER-2574 | Autopurge retains all log files whose zxid are >= the zxid of the oldest snapshot file that it is going to retain (in PurgeTxnLog retainNRecentSnapshots()). However, unless there is a log file with the same zxid as the oldest snapshot file being retained (and whether log file and snapshot file zxids are equal is timing dependent), loading the database from snapshots/logs will start with the log file _prior_ to the snapshot's zxid. Thus, to avoid data loss autopurge should retain the log file prior to the oldest retained snapshot as well, unless it verifies that it contains no zxids beyond what the snapshot contains or there is a log file whose zxid == snapshot zxid. |
9223372036854775807 | No Perforce job exists for this issue. | 3 | 9223372036854775807 | 3 years, 17 weeks, 6 days ago | 0|i2x0xb: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
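The retention rule proposed above can be sketched over plain zxid lists (a hypothetical helper, not PurgeTxnLog itself): keep every log file starting at or after the oldest retained snapshot's zxid, plus the single log immediately before it, since that log may contain transactions newer than the snapshot:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class AutopurgeSketch {
    // logZxids: starting zxid of each log file; snapZxid: zxid of the oldest
    // snapshot being retained. Returns the starting zxids of logs to keep.
    public static List<Long> logsToRetain(List<Long> logZxids, long snapZxid) {
        List<Long> sorted = new ArrayList<>(logZxids);
        Collections.sort(sorted);
        List<Long> keep = new ArrayList<>();
        long prior = Long.MIN_VALUE; // newest log starting strictly before snapZxid
        for (long zxid : sorted) {
            if (zxid >= snapZxid) {
                keep.add(zxid);
            } else {
                prior = zxid;
            }
        }
        // Keep one extra log: it may hold transactions past the snapshot's zxid.
        if (prior != Long.MIN_VALUE) {
            keep.add(0, prior);
        }
        return keep;
    }
}
```

As the report notes, the extra log could instead be dropped after verifying it contains no zxids beyond the snapshot; this sketch takes the conservative option.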
| ZooKeeper | ZOOKEEPER-2419 | Zookeeper.log filling up faster due to clients without Auth (KeeperErrorCode = NoAuth) |
Improvement | Open | Blocker | Unresolved | Unassigned | Karthik Shivanna | Karthik Shivanna | 01/May/16 07:12 | 16/Jun/19 06:59 | 3.4.6 | 0 | 2 | I am seeing that the /var/log/zookeeper/zookeeper.out file is getting filled up faster than usual. It has grown up to 5 GB. When I looked further into the out file, a lot of the entries are [INFO] lines like the following: 2016-03-22 02:03:42,621 - INFO [ProcessThread(sid:4 cport:-1)::PrepRequestProcessor@645] - Got user-level KeeperException when processing sessionid:0x4534413d1f70001 type:create cxid:0x71e0aa99 zxid:0x5f00e3de69 txntype:-1 reqpath:n/a Error Path:null Error:KeeperErrorCode = NoAuth The log4j properties file was modified to change the parameter for logging from INFO, CONSOLE to INFO, ROLLINGFILE. But I would like to understand where the above INFO is coming from. Any help is greatly appreciated. Thanks Zookeeper version: 3.4.6-249--1 |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 |
Important
|
3 years, 46 weeks ago | 0|i2x0j3: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2418 | txnlog diff sync can skip sending some transactions to followers |
Bug | Resolved | Critical | Fixed | Brian Nixon | Nicholas Wolchko | Nicholas Wolchko | 28/Apr/16 12:57 | 03/Jul/19 21:17 | 03/Jul/19 14:54 | 3.5.1 | 3.6.0 | server | 0 | 5 | 604800 | 598800 | 6000 | 0% | If the leader is having disk issues so that its on disk txnlog is behind the in memory commit log, it will send a DIFF that is missing the transactions in between the two. Example: There are 5 hosts in the cluster. 1 is the leader. 5 is disconnected. We commit up to zxid 1000. At zxid 450, the leader's disk stalls, but we still commit transactions because 2,3,4 are up and acking writes. At zxid 1000, the txnlog on the leader has 1-450 and the commit log has 500-1000. Then host 5 regains its connection to the cluster and syncs with the leader. It will receive a DIFF containing zxids 1-450 and 500-1000. This is because queueCommittedProposals in the LearnerHandler just queues everything within its zxid range. It doesn't give an error if there is a gap between peerLastZxid and the iterator it is queueing from. |
0% | 0% | 6000 | 598800 | 604800 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 37 weeks ago | 0|i2wwxr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
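A guard of the kind this report implies can be sketched as a pure predicate (hypothetical; the real check would belong in LearnerHandler.queueCommittedProposals()): a DIFF is only safe when the follower's last zxid falls inside the window the leader can actually replay contiguously.

```java
public class DiffGuard {
    // peerLastZxid: last zxid the rejoining follower has.
    // minZxid/maxZxid: the contiguous range the leader can replay.
    // A DIFF whose first zxid exceeds peerLastZxid + 1 silently skips
    // transactions, which is exactly the gap described in the report.
    public static boolean diffCoversPeer(long peerLastZxid, long minZxid, long maxZxid) {
        return peerLastZxid >= minZxid - 1 && peerLastZxid <= maxZxid;
    }
}
```

In the report's scenario the commit log covers 500..1000 while the follower is around zxid 450, so the predicate is false and the leader should fall back to a SNAP instead of queueing a gapped DIFF.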
| ZooKeeper | ZOOKEEPER-2416 | Remove Java System property usage from ZooKeeper server code |
Improvement | Open | Major | Unresolved | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 26/Apr/16 04:02 | 05/Feb/20 07:16 | 3.7.0, 3.5.8 | server | 0 | 3 | ZOOKEEPER-2139 | Many ZooKeeper properties are used as Java System properties in the ZooKeeper code. Some examples: {code} public static int getSnapCount() { String sc = System.getProperty("zookeeper.snapCount"); {code} {code} public int getGlobalOutstandingLimit() { String sc = System.getProperty("zookeeper.globalOutstandingLimit"); {code} Using ZooKeeper properties as Java system properties causes the following problems # Cannot create two or more ZooKeeper servers in a single JVM with different properties for testing # The properties initialization and validation are mixed in with the actual business logic, which should not be the case. ZOOKEEPER-2139 removed the ZooKeeper client-side Java System properties, so this jira covers removing only the ZooKeeper server-side Java System properties. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 47 weeks, 2 days ago | 0|i2wqjj: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
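The first problem above - two servers in one JVM cannot have different settings - goes away once properties travel in an instance instead of the JVM-global System table. A minimal sketch with a hypothetical ZkServerConfig (not the actual ZooKeeper config class; the defaults shown match ZooKeeper's documented defaults):

```java
import java.util.Properties;

public class ZkServerConfig {
    private final Properties props;

    public ZkServerConfig(Properties props) {
        this.props = props; // per-instance, not JVM-global System properties
    }

    public int getSnapCount() {
        return Integer.parseInt(props.getProperty("zookeeper.snapCount", "100000"));
    }

    public int getGlobalOutstandingLimit() {
        return Integer.parseInt(props.getProperty("zookeeper.globalOutstandingLimit", "1000"));
    }
}
```

Two instances constructed from different Properties objects can now coexist in one test JVM, which System.getProperty() lookups make impossible.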
| ZooKeeper | ZOOKEEPER-2415 | SessionTest is using Thread deprecated API. |
Test | Closed | Major | Fixed | Andor Molnar | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 24/Apr/16 10:30 | 04/Oct/19 10:55 | 24/Apr/18 19:49 | 3.4.8, 3.5.1, 3.6.0 | 3.5.4, 3.6.0, 3.4.13 | tests | 0 | 4 | ZOOKEEPER-3026 | The test class is using calls such as {{Thread.suspend}} and {{Thread.resume}}. | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 1 year, 47 weeks, 1 day ago |
Reviewed
|
0|i2wnxz: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2414 | c-client aborted when operate's path is invalid in zoo_amulti |
Bug | Patch Available | Major | Unresolved | Meyer Kizner | Tianyi Zhang | Tianyi Zhang | 21/Apr/16 00:07 | 14/Dec/19 06:08 | 3.4.6, 3.4.8 | 3.7.0 | c client | 0 | 5 | ZOOKEEPER-2267 | code like this: {code} zoo_op_t ops[2]; zoo_op_result_t results[2]; zoo_create_op_init(ops, "test", "1", 1, &ZOO_OPEN_ACL_UNSAFE, 0, NULL, 0); zoo_create_op_init(ops+1, "/test/1", "2", 1, &ZOO_OPEN_ACL_UNSAFE, 0, NULL, 0); zoo_multi(zkhandle, 2, ops, results); {code} The ops->path is invalid, and it will cause double free in the line 3136 of zookeeper.c. {code} for (index=0; index < count; index++) { const zoo_op_t *op = ops+index; zoo_op_result_t *result = results+index; completion_list_t *entry = NULL; struct MultiHeader mh = { STRUCT_INITIALIZER(type, op->type), STRUCT_INITIALIZER(done, 0), STRUCT_INITIALIZER(err, -1) }; rc = rc < 0 ? rc : serialize_MultiHeader(oa, "multiheader", &mh); switch(op->type) { case ZOO_CREATE_OP: { struct CreateRequest req; rc = rc < 0 ? rc : CreateRequest_init(zh, &req, op->create_op.path, op->create_op.data, op->create_op.datalen, op->create_op.acl, op->create_op.flags); rc = rc < 0 ? rc : serialize_CreateRequest(oa, "req", &req); result->value = op->create_op.buf; result->valuelen = op->create_op.buflen; enter_critical(zh); entry = create_completion_entry(h.xid, COMPLETION_STRING, op_result_string_completion, result, 0, 0); leave_critical(zh); --> free_duplicate_path(req.path, op->create_op.path); break; } {code} This problem will happen when the 'rc' of last op is less than 0(maybe ZBADARGUMENTS or ZINVALIDSTATE). In my case, rc of op[0] is ZBADARGUMENTS, and the req.path of the ‘free_duplicate_path’ is still 'test' when execute op[1]. I‘m confused about why not break the for-loop when the 'rc' is less than 0? |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 1 year, 20 weeks, 6 days ago | 0|i2wf6f: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2413 | ContainerManager doesn't close the Timer it creates when stop() is called |
Bug | Closed | Major | Fixed | Jordan Zimmerman | Jordan Zimmerman | Jordan Zimmerman | 13/Apr/16 14:35 | 21/Jul/16 16:18 | 24/Apr/16 17:22 | 3.5.1 | 3.5.2, 3.6.0 | server | 0 | 5 | ZOOKEEPER-2163 | ContainerManager creates a Timer object. Its stop() method cancels the running task but doesn't close the Timer itself. This ends up leaking a Thread (internal to the Timer). | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 47 weeks, 4 days ago | Thanks, Jordan. Branch 3.5: Committed revision 1740737. Trunk: Committed revision 1740738. |
0|i2w2nz: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
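The distinction matters because java.util.Timer owns a background worker thread: cancelling the scheduled TimerTask stops the work, but only Timer.cancel() lets that thread terminate. A stdlib-only sketch of the leak and the fix:

```java
import java.util.Timer;
import java.util.TimerTask;

public class TimerShutdownSketch {
    public static boolean stopCleanly() {
        Timer timer = new Timer("container-check", true);
        TimerTask task = new TimerTask() {
            @Override
            public void run() {
                // periodic container-cleanup work would go here
            }
        };
        timer.schedule(task, 1000, 1000);
        task.cancel();  // what the buggy stop() did: task gone, thread still alive
        timer.cancel(); // the fix: terminates the Timer's worker thread too
        try {
            // A cancelled Timer rejects new tasks, which shows it has shut down.
            timer.schedule(new TimerTask() {
                @Override
                public void run() {
                }
            }, 10);
            return false;
        } catch (IllegalStateException expected) {
            return true;
        }
    }
}
```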
| ZooKeeper | ZOOKEEPER-2412 | leader zk out of memory, and leader db lastZxid is not update when process set data. |
Bug | Resolved | Major | Duplicate | Unassigned | Yongcheng Liu | Yongcheng Liu | 12/Apr/16 09:32 | 10/May/16 09:57 | 10/May/16 09:57 | 3.4.6 | 3.4.8 | 0 | 4 | 3 zk, the follower (myid=6) server is injected into the network delay fault for test, so this follower(id=6) data is behind leader, leader need send SNAP to this follower every time. | the time begin to have problem is 20:59:47 (1) out of memory log: 2016-03-24 23:01:21,355 [myid:4] - INFO [LearnerHandler-/192.168.50.26:35112:LearnerHandler@330] - Follower sid: 6 : info : org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@5cf112b0 2016-03-24 23:01:21,355 [myid:4] - INFO [LearnerHandler-/192.168.50.26:35112:LearnerHandler@385] - Synchronizing with Follower sid: 6 maxCommittedLog=0x9000033a1 minCommittedLog=0x9000031ad peerLastZxid=0x90000280a 2016-03-24 23:01:21,355 [myid:4] - WARN [LearnerHandler-/192.168.50.26:35112:LearnerHandler@446] - Unhandled proposal scenario 2016-03-24 23:01:21,355 [myid:4] - INFO [LearnerHandler-/192.168.50.26:35112:LearnerHandler@462] - Sending SNAP 2016-03-24 23:01:21,893 [myid:4] - INFO [NIOServerCxn.Factory:/192.168.50.24:10540:NIOServerCnxnFactory@207] - Current connection (from /192.168.50.22 Cnxns = 4; totalCnxns = 15) 2016-03-24 23:01:22,625 [myid:4] - WARN [NIOServerCxn.Factory:/192.168.50.24:10540:ZooKeeperServer@832] - Connection request from old client /192.168.50.22:49695; will be dropped if server is in r-o mode 2016-03-24 23:01:23,283 [myid:4] - INFO [QuorumPeer[myid=4]/192.168.50.24:10540:Leader@493] - Shutting down 2016-03-24 23:01:24,102 [myid:4] - INFO [QuorumPeer[myid=4]/192.168.50.24:10540:Leader@499] - Shutdown called 2016-03-24 23:01:24,040 [myid:4] - INFO [SessionTracker:ZooKeeperServer@347] - Expiring session 0x453a6dc5b7a007e, timeout of 3500ms exceeded Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "NIOServerCxn.Factory:/192.168.50.24:10540" 2016-03-24 23:01:25,001 [myid:4] - WARN 
[QuorumPeer[myid=4]/192.168.50.24:10540:QuorumPeer@827] - QuorumPeer main thread exited Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "LearnerHandler-/192.168.50.26:35112" Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "QuorumPeer[myid=4]/192.168.50.24:10540" 2016-03-24 23:01:24,227 [myid:4] - ERROR [LearnerHandler-/192.168.50.26:35112:NIOServerCnxnFactory$1@44] - Thread LearnerHandler Socket[addr=/192.168.50.26,port=35112,localport=10550] tickOfNextAckDeadline:38310 synced?:true queuedPacketLength:7910 died 2016-03-24 23:01:28,492 [myid:4] - INFO [main:QuorumPeerMain@93] - Exiting normally (2) this leader is very strange, db lastzxid not update, log as follow (after grep "Synchronizing with"), we can see max commit zxid from leader is not update any more. From the beginning of 20:59:47, leader lastZxid not update. 2016-03-24 20:56:10,266 [myid:4] - INFO [LearnerHandler-/192.168.50.26:48439:LearnerHandler@385] - Synchronizing with Follower sid: 6 maxCommittedLog=0x90000333f minCommittedLog=0x90000314b peerLastZxid=0x90000280a 2016-03-24 20:57:59,203 [myid:4] - INFO [LearnerHandler-/192.168.50.26:46956:LearnerHandler@385] - Synchronizing with Follower sid: 6 maxCommittedLog=0x900003398 minCommittedLog=0x9000031a4 peerLastZxid=0x90000280a 2016-03-24 20:59:47,928 [myid:4] - INFO [LearnerHandler-/192.168.50.26:20601:LearnerHandler@385] - Synchronizing with Follower sid: 6 maxCommittedLog=0x9000033a1 minCommittedLog=0x9000031ad peerLastZxid=0x90000280a 2016-03-24 21:01:26,475 [myid:4] - INFO [LearnerHandler-/192.168.50.26:29622:LearnerHandler@385] - Synchronizing with Follower sid: 6 maxCommittedLog=0x9000033a1 minCommittedLog=0x9000031ad peerLastZxid=0x90000280a 2016-03-24 21:03:16,552 [myid:4] - INFO [LearnerHandler-/192.168.50.26:35717:LearnerHandler@385] - Synchronizing with Follower sid: 6 maxCommittedLog=0x9000033a1 minCommittedLog=0x9000031ad peerLastZxid=0x90000280a 2016-03-24 
21:03:44,427 [myid:4] - INFO [LearnerHandler-/192.168.50.26:48197:LearnerHandler@385] - Synchronizing with Follower sid: 6 maxCommittedLog=0x9000033a1 minCommittedLog=0x9000031ad peerLastZxid=0x90000280a 2016-03-24 21:05:01,125 [myid:4] - INFO [LearnerHandler-/192.168.50.26:57826:LearnerHandler@385] - Synchronizing with Follower sid: 6 maxCommittedLog=0x9000033a1 minCommittedLog=0x9000031ad peerLastZxid=0x90000280a 2016-03-24 21:06:54,187 [myid:4] - INFO [LearnerHandler-/192.168.50.26:30137:LearnerHandler@385] - Synchronizing with Follower sid: 6 maxCommittedLog=0x9000033a1 minCommittedLog=0x9000031ad peerLastZxid=0x90000280a 2016-03-24 21:07:42,780 [myid:4] - INFO [LearnerHandler-/192.168.50.26:24255:LearnerHandler@385] - Synchronizing with Follower sid: 6 maxCommittedLog=0x9000033a1 minCommittedLog=0x9000031ad peerLastZxid=0x90000280a 2016-03-24 21:08:41,279 [myid:4] - INFO [LearnerHandler-/192.168.50.26:40909:LearnerHandler@385] - Synchronizing with Follower sid: 6 maxCommittedLog=0x9000033a1 minCommittedLog=0x9000031ad peerLastZxid=0x90000280a 2016-03-24 21:10:23,137 [myid:4] - INFO [LearnerHandler-/192.168.50.26:64166:LearnerHandler@385] - Synchronizing with Follower sid: 6 maxCommittedLog=0x9000033a1 minCommittedLog=0x9000031ad peerLastZxid=0x90000280a 2016-03-24 21:11:50,003 [myid:4] - INFO [LearnerHandler-/192.168.50.26:56070:LearnerHandler@385] - Synchronizing with Follower sid: 6 maxCommittedLog=0x9000033a1 minCommittedLog=0x9000031ad peerLastZxid=0x90000280a 2016-03-24 21:12:11,956 [myid:4] - INFO [LearnerHandler-/192.168.50.26:41423:LearnerHandler@385] - Synchronizing with Follower sid: 6 maxCommittedLog=0x9000033a1 minCommittedLog=0x9000031ad peerLastZxid=0x90000280a 2016-03-24 21:13:08,286 [myid:4] - INFO [LearnerHandler-/192.168.50.26:26757:LearnerHandler@385] - Synchronizing with Follower sid: 6 maxCommittedLog=0x9000033a1 minCommittedLog=0x9000031ad peerLastZxid=0x90000280a 2016-03-24 21:13:59,960 [myid:4] - INFO 
[LearnerHandler-/192.168.50.26:62785:LearnerHandler@385] - Synchronizing with Follower sid: 6 maxCommittedLog=0x9000033a1 minCommittedLog=0x9000031ad peerLastZxid=0x90000280a 2016-03-24 21:15:41,103 [myid:4] - INFO [LearnerHandler-/192.168.50.26:53141:LearnerHandler@385] - Synchronizing with Follower sid: 6 maxCommittedLog=0x9000033a1 minCommittedLog=0x9000031ad peerLastZxid=0x90000280a 2016-03-24 21:16:11,125 [myid:4] - INFO [LearnerHandler-/192.168.50.26:39551:LearnerHandler@385] - Synchronizing with Follower sid: 6 maxCommittedLog=0x9000033a1 minCommittedLog=0x9000031ad peerLastZxid=0x90000280a 2016-03-24 21:17:25,541 [myid:4] - INFO [LearnerHandler-/192.168.50.26:24638:LearnerHandler@385] - Synchronizing with Follower sid: 6 maxCommittedLog=0x9000033a1 minCommittedLog=0x9000031ad peerLastZxid=0x90000280a 2016-03-24 21:18:25,039 [myid:4] - INFO [LearnerHandler-/192.168.50.26:54723:LearnerHandler@385] - Synchronizing with Follower sid: 6 maxCommittedLog=0x9000033a1 minCommittedLog=0x9000031ad peerLastZxid=0x90000280a 2016-03-24 21:19:04,148 [myid:4] - INFO [LearnerHandler-/192.168.50.26:37450:LearnerHandler@385] - Synchronizing with Follower sid: 6 maxCommittedLog=0x9000033a1 minCommittedLog=0x9000031ad peerLastZxid=0x90000280a (3) we can see leader Snapshotting to the same file in different time, but receive different TxnZxid(0x900003b70 and 0x9000049dd), this show leader has not been updated lastZxid in db. 
Snapshotting 1: 2016-03-24 21:33:27,214 [myid:4] - INFO [Snapshot Thread:FileTxnSnapLog@253] - Snapshotting: 0x9000033a1 to /opt/dsware/agent/zk/data/version-2/snapshot.9000033a1 2016-03-24 21:33:27,333 [myid:4] - INFO [SyncThread:4:FileTxnLog@199] - Creating new log file: log.900003b70 Snapshotting 2: 2016-03-24 22:41:26,601 [myid:4] - INFO [Snapshot Thread:FileTxnSnapLog@253] - Snapshotting: 0x9000033a1 to /opt/dsware/agent/zk/data/version-2/snapshot.9000033a1 2016-03-24 22:41:26,662 [myid:4] - INFO [SyncThread:4:FileTxnLog@199] - Creating new log file: log.9000049dd (4) finally, this node(leader server) zxid is behind zk c client, log as follow: 2016-03-24 23:00:53,712 [myid:4] - WARN [NIOServerCxn.Factory:/192.168.50.24:10540:ZooKeeperServer@832] - Connection request from old client /192.168.50.23:35043; will be dropped if server is in r-o mode 2016-03-24 23:00:53,713 [myid:4] - INFO [NIOServerCxn.Factory:/192.168.50.24:10540:ZooKeeperServer@851] - Refusing session request for client /192.168.50.23:35043 as it has seen zxid 0x900004e1f our last zxid is 0x9000033a1 client must try another server |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 45 weeks, 2 days ago | the same as https://issues.apache.org/jira/browse/ZOOKEEPER-2201 | 0|i2vzxz: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2411 | Bug | Resolved | Major | Invalid | Unassigned | André Cruz | André Cruz | 11/Apr/16 05:03 | 11/Apr/16 05:04 | 11/Apr/16 05:04 | 0 | 1 | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 49 weeks, 3 days ago | 0|i2vx0f: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2410 | add time unit to 'ELECTION TOOK' log.info message |
Improvement | Closed | Trivial | Fixed | Christine Poerschke | Christine Poerschke | Christine Poerschke | 01/Apr/16 14:20 | 21/Jul/16 16:18 | 01/Jun/16 19:01 | 3.5.2, 3.6.0 | leaderElection, quorum, server | 0 | 4 | A three-line change to add the time unit to 'ELECTION TOOK' log.info message to help people not yet so familiar with zookeeper interpret log files. | 9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 |
Patch
|
3 years, 42 weeks, 1 day ago |
Reviewed
|
0|i2vj3j: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2409 | zookeeper recipes lock's C implementation: function child_floor bug |
Bug | Open | Major | Unresolved | Unassigned | europelee | europelee | 01/Apr/16 02:59 | 01/Apr/16 02:59 | 3.4.8 | c client | 1 | 2 | 3600 | 3600 | 0% | zookeeper cluster with multiple servers (not standalone mode), and multiple clients connected to different zookeeper servers. | 1. start a zookeeper cluster with multiple servers 2. multiple clients connect to different zookeeper servers at the same time 3. clients use the lock from the zookeeper recipes lock's C implementation. For example: Client A creates a node like x-025373e3a9960050-0000000067 and Client B creates a node like x-015373e3a9960050-0000000068; A is the lock owner now. Then kill A; as expected, B should become the owner, but in fact it does not. In zoo_lock.c, the function zkr_lock_operation calls child_floor to find the predecessor node to monitor, but child_floor has a bug that causes B not to watch its predecessor A: child_floor simply strcmp's the full name "x-025373e3a9960050-0000000067" against B's own node "x-015373e3a9960050-0000000068", when it should only strcmp the sequence suffixes "0000000067" and "0000000068", excluding the session info. Besides, it would be better to use binary search than to traverse every node when looking for the predecessor among many nodes. fix:
{code}
static char* child_floor(char **sorted_data, int len, char *element) {
    char* ret = NULL;
    int begin = 0;
    int end = len - 1;
    int index = 0;
    while (begin <= end) {
        index = (begin + end) / 2;
        int iCmpRet = strcmp(strrchr(sorted_data[index], '-') + 1,
                             strrchr(element, '-') + 1);
        if (iCmpRet < 0) {
            begin = index + 1;
        } else if (iCmpRet == 0) {
            if (index - 1 >= 0) {
                ret = sorted_data[index - 1];
            }
            break;
        } else {
            end = index - 1;
        }
    }
    return ret;
}
{code} |
0% | 0% | 3600 | 3600 | easyfix | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 50 weeks, 6 days ago | 0|i2vi27: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
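The sequence-suffix comparison that the proposed fix relies on can be illustrated outside the C client. The sketch below (Java, with hypothetical helper names; the real fix lives in the C client's zoo_lock.c) shows why comparing only the digits after the last '-' orders the lock nodes correctly, while a plain comparison of the full names, which embed the session id, does not:

```java
public class ChildFloor {
    // Extract the sequence suffix after the last '-'.
    static String seq(String node) {
        return node.substring(node.lastIndexOf('-') + 1);
    }

    // Return the node that immediately precedes `element` in sequence
    // order, or null if `element` is the lowest. Assumes `sorted` is
    // sorted by sequence suffix; binary search, as the reporter suggests.
    static String childFloor(String[] sorted, String element) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = (lo + hi) / 2;
            int cmp = seq(sorted[mid]).compareTo(seq(element));
            if (cmp < 0) {
                lo = mid + 1;
            } else if (cmp == 0) {
                return mid > 0 ? sorted[mid - 1] : null;
            } else {
                hi = mid - 1;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        String a = "x-025373e3a9960050-0000000067";
        String b = "x-015373e3a9960050-0000000068";
        // By sequence, a (..67) precedes b (..68), even though a plain
        // comparison of the full names would order them the other way.
        String[] sorted = {a, b};
        System.out.println(childFloor(sorted, b)); // prints a's name
    }
}
```

With the session id included, B's name sorts before A's, so B never finds A as its predecessor; comparing suffixes alone restores the intended ordering.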
| ZooKeeper | ZOOKEEPER-2408 | zoo_awget() ctx memory does not be freed if callback watcher was not invoked |
Bug | Open | Major | Unresolved | Unassigned | Kevin | Kevin | 31/Mar/16 22:44 | 31/Mar/16 22:44 | 3.4.6 | c client | 0 | 1 | centos 7 | I launch a client and watch some node by calling zoo_awget() to watch its data. When the client exits, if the node data hasn't changed, the callback 'watcher' is never invoked, and the memory of watcherCtx is not freed: ZOOAPI int zoo_awget(zhandle_t *zh, const char *path, watcher_fn watcher, void* watcherCtx, data_completion_t completion, const void *data); I used valgrind to check, and the result shows the memory is lost. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 50 weeks, 6 days ago | 0|i2vhtj: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2407 | EventThread in ClientCnxn can't be closed when SendThread exits because of auth failed during reconnection |
Bug | Open | Major | Unresolved | sunhaitao | sunhaitao | sunhaitao | 30/Mar/16 07:21 | 05/Feb/20 07:16 | 3.5.1 | 3.7.0, 3.5.8 | 0 | 4 | ZOOKEEPER-3059 | EventThread in ClientCnxn can't be closed when SendThread exits because auth failed during reconnection: if the send thread is in the auth-failed state, it exits, but the event thread keeps running. Observation: checking the running threads with jstack shows the send thread no longer exists but the event thread is still there; even after we call zookeeper.close(), the event thread remains. Stack trace: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:514) |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 1 year, 17 weeks, 1 day ago | 0|i2ve8n: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2406 | /etc/zookeeper isn't used for ZOOCFGDIR |
Bug | Open | Minor | Unresolved | Unassigned | Mark Elrod | Mark Elrod | 29/Mar/16 15:35 | 29/Mar/16 15:35 | 3.5.1 | 0 | 2 | The comment in zkEnv.sh indicates that /etc/zookeeper should be an option for the ZOOCFGDIR but the code beneath it does not look to see if it exists: {noformat} # We use ZOOCFGDIR if defined, # otherwise we use /etc/zookeeper # or the conf directory that is # a sibling of this script's directory ZOOBINDIR="${ZOOBINDIR:-/usr/bin}" ZOOKEEPER_PREFIX="${ZOOBINDIR}/.." if [ "x$ZOOCFGDIR" = "x" ] then if [ -e "${ZOOKEEPER_PREFIX}/conf" ]; then ZOOCFGDIR="$ZOOBINDIR/../conf" else ZOOCFGDIR="$ZOOBINDIR/../etc/zookeeper" fi fi {noformat} Should this be something like: {noformat} if [ "x$ZOOCFGDIR" = "x" ] then if [ -e "/etc/zookeeper" ]; then ZOOCFGDIR="/etc/zookeeper" elif [ -e "${ZOOKEEPER_PREFIX}/conf" ]; then ZOOCFGDIR="$ZOOBINDIR/../conf" else ZOOCFGDIR="$ZOOBINDIR/../etc/zookeeper" fi fi {noformat} I am not sure if ZOOBINDIR/../etc/zookeeper is supposed to be an option or a typo but in the default setup ZOOBINDIR/../conf exists so even if it were changed to /etc/zookeeper it would never try to use it. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 51 weeks, 2 days ago | 0|i2vd2n: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2405 | getTGT() in Login.java mishandles confidential information |
Bug | Closed | Blocker | Fixed | Michael Han | Patrick D. Hunt | Patrick D. Hunt | 26/Mar/16 15:04 | 21/Jul/16 16:18 | 25/May/16 16:47 | 3.4.8, 3.5.1, 3.6.0 | 3.4.9, 3.5.2, 3.6.0 | kerberos, security, server | 0 | 5 | We're logging the kerberos ticket when in debug mode, probably not the best idea. This was identified as a "critical" issue by Fortify. {noformat} for(KerberosTicket ticket: tickets) { KerberosPrincipal server = ticket.getServer(); if (server.getName().equals("krbtgt/" + server.getRealm() + "@" + server.getRealm())) { LOG.debug("Found tgt " + ticket + "."); return ticket; } } {noformat} |
9223372036854775807 | No Perforce job exists for this issue. | 5 | 9223372036854775807 | 3 years, 43 weeks, 1 day ago |
Reviewed
|
0|i2v96v: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2404 | Sections to be completed in Programmer's guide document. |
Task | Open | Trivial | Unresolved | Unassigned | Ibrahim | Ibrahim | 24/Mar/16 11:11 | 24/Mar/16 11:19 | 4.0.0 | documentation | 1 | 2 | 7257600 | 7257600 | 0% | There are some sections in the Programmer's Guide document that need to be completed, such as Read Operations, Write Operations, and Connecting to ZooKeeper. |
0% | 0% | 7257600 | 7257600 | documentation | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years ago | 0|i2v5vb: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2403 | Consistently handle and document boolean properties (true/false as well as yes/no) |
Wish | Open | Trivial | Unresolved | Ryan P | Ryan P | Ryan P | 23/Mar/16 16:21 | 11/May/16 11:44 | 0 | 3 | Currently zookeeper.skipACL is evaluated as either yes or no. This is less than intuitive; most developers would expect it to accept true or false. https://github.com/apache/zookeeper/blob/trunk/src/java/main/org/apache/zookeeper/server/PrepRequestProcessor.java#L96 |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 45 weeks, 1 day ago | 0|i2v473: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
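A consistent handler for such boolean properties, as the summary suggests, could accept both spellings in one shared place. A sketch (parseBool is a hypothetical helper, not ZooKeeper's actual API):

```java
public class BoolProps {
    // Accept true/false as well as yes/no, case-insensitively,
    // and reject anything else so misspellings fail fast instead
    // of silently defaulting.
    static boolean parseBool(String value) {
        String v = value.trim().toLowerCase();
        if (v.equals("true") || v.equals("yes")) return true;
        if (v.equals("false") || v.equals("no")) return false;
        throw new IllegalArgumentException("Not a boolean: " + value);
    }

    public static void main(String[] args) {
        // e.g. honoring either spelling of zookeeper.skipACL
        System.out.println(parseBool(System.getProperty("zookeeper.skipACL", "no")));
    }
}
```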
| ZooKeeper | ZOOKEEPER-2402 | Document client side properties |
Improvement | Closed | Major | Fixed | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 23/Mar/16 08:09 | 29/Sep/16 17:42 | 23/May/16 17:34 | 3.5.2, 3.6.0 | documentation | 0 | 5 | ZOOKEEPER-2397, ZOOKEEPER-1295 | There are many ZooKeeper Java client properties which are not documented. The following client properties are not documented in the Admin Guide: # zookeeper.sasl.client.username # zookeeper.sasl.clientconfig # zookeeper.sasl.client # zookeeper.server.realm # zookeeper.disableAutoWatchReset The following two client properties are documented in the Admin Guide, but in the server configuration section: # zookeeper.client.secure # zookeeper.clientCnxnSocket |
9223372036854775807 | No Perforce job exists for this issue. | 4 | 9223372036854775807 | 3 years, 43 weeks, 3 days ago | 0|i2v35z: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2401 | Document leader.nodelay and follower.nodelay ZooKeeper server properties |
Bug | Open | Minor | Unresolved | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 23/Mar/16 07:40 | 23/Mar/16 07:40 | documentation | 0 | 1 | The following properties are used in the ZooKeeper server and are configurable, but they are not documented in the admin guide: # leader.nodelay # follower.nodelay |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 1 day ago | 0|i2v353: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2400 | ZooKeeper not starting: Follower is ahead of the leader |
Bug | Resolved | Major | Not A Problem | Unassigned | Andrey | Andrey | 23/Mar/16 05:00 | 23/Mar/16 17:31 | 23/Mar/16 17:31 | 3.4.6 | quorum | 0 | 2 | Steps to reproduce: # Select deprecated algorithm in zoo.cfg: {code}electionAlg=0{code} # Start zookeeper cluster: A(index 3),B(index 1),C(index 2) nodes # Stop A node. # Make some change to zk data. i.e. re-create ephemeral node. Make sure currentEpoch increased in B and C nodes. # currentEpoch/acceptedEpoch in node A less than B/C epoch # Stop node B. Zookeeper cluster is not available # Start node A. In A's node logs: {code} LEADING [quorum.QuorumPeer] [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:15523] LEADING - LEADER ELECTION TOOK - 1458721180995 [quorum.Leader] Follower sid: 2 : info : org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@3a888c1 java.io.IOException: Follower is ahead of the leader, leader summary: 10 (current epoch), 42949672964 (last zxid) at org.apache.zookeeper.server.quorum.Leader.waitForEpochAck(Leader.java:894) at org.apache.zookeeper.server.quorum.LearnerHandler.run(LearnerHandler.java:365) ... Follower sid: 1 : info : org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@5001b9f5 ... java.lang.InterruptedException: Timeout while waiting for epoch to be acked by quorum at org.apache.zookeeper.server.quorum.Leader.waitForEpochAck(Leader.java:915) at org.apache.zookeeper.server.quorum.Leader.lead(Leader.java:394) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:799) {code} The logs above will be printed indefinitely and cluster won't recover. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 1 day ago | 0|i2v2uf: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2399 | Partial Initial Patches to update Zookeeper trunk to Netty 4.x API |
Improvement | Resolved | Major | Fixed | Unassigned | William L Thomson Jr | William L Thomson Jr | 22/Mar/16 18:51 | 22/Jul/19 00:27 | 16/Jul/19 16:38 | 1 | 5 | These are initial patches to update Zookeeper to Netty 4.x API. They are not complete, as I am not familiar with Zookeeper or Netty. I just used Netty documents that detailed the difference in the API to make the changes. I believe I am 80-90% of the way there, but need someone familiar with Zookeeper and/or Netty to finish the last part and make sure it actually works and I did not fubar Zookeeper :) | 9223372036854775807 | No Perforce job exists for this issue. | 3 | 9223372036854775807 | 34 weeks, 3 days ago | 0|i2v22n: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2398 | Config options should not be only available via system property |
Improvement | Open | Minor | Unresolved | Unassigned | Jason Rosenberg | Jason Rosenberg | 22/Mar/16 09:11 | 05/Feb/20 07:16 | 3.7.0, 3.5.8 | 0 | 2 | ZOOKEEPER-2394 | Some config options (such as enabling readonly mode) are only settable via a system property. This feels clunky, and makes it less seamless for testing, or for apps which embed a ZooKeeper inside a java container, etc. I ran into this issue specifically in the case of creating unit tests to test read-only mode client side behavior. In this case, I want to run multiple QuorumPeer's in the same jvm, and have some of them enabled for read-only and some not enabled. This is not possible with the current System.setProperty approach. In general, I question the need for using system properties for configuration, since it makes embedding a server within a dependency injection framework more difficult, and is in general less easy to integrate into generic deployment systems. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 2 days ago | 0|i2v0vz: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
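The friction described in ZOOKEEPER-2398 — a global System.setProperty cannot differ between two QuorumPeers in the same JVM — goes away if each peer consults an instance-level setting first. A sketch (the class and method names are hypothetical, not ZooKeeper's actual API):

```java
import java.util.Map;

public class PeerConfig {
    private final Map<String, String> overrides; // per-instance settings

    public PeerConfig(Map<String, String> overrides) {
        this.overrides = overrides;
    }

    // An instance-level override wins; only fall back to the JVM-wide
    // system property when the instance has no opinion. Two peers in
    // one JVM can then disagree about read-only mode.
    public boolean readOnlyModeEnabled() {
        String v = overrides.getOrDefault("readonlymode.enabled",
                System.getProperty("readonlymode.enabled", "false"));
        return Boolean.parseBoolean(v);
    }
}
```

This is exactly the unit-test scenario from the report: one peer constructed with the override enabled, another without, both in the same JVM.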
| ZooKeeper | ZOOKEEPER-2397 | ZOOKEEPER-2314 Undocumented SASL properties |
Sub-task | Open | Major | Unresolved | Flavio Paiva Junqueira | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 21/Mar/16 18:59 | 22/Jun/18 00:49 | 3.4.8, 3.5.1, 3.4.11 | documentation | 0 | 5 | ZOOKEEPER-2402 | There are a number of properties spread across the code that do not appear in the docs. For example, zookeeper.allowSaslFailedClients isn't documented afaict. | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 24 weeks, 3 days ago | 0|i2uzq7: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2396 | ZOOKEEPER-2314 Login object in ZooKeeperSaslClient is static |
Sub-task | Closed | Major | Fixed | Flavio Paiva Junqueira | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 21/Mar/16 18:58 | 21/Jul/16 16:18 | 07/May/16 11:51 | 3.4.9, 3.5.2, 3.6.0 | documentation | 0 | 4 | ZOOKEEPER-2330, ZOOKEEPER-2139 | The login object in ZooKeeperSaslClient is static, which means that if you try to create another client for tests, the login object will be the first one you've set for all runs. I've experienced this with 3.4.6. | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 45 weeks, 5 days ago | 0|i2uzpz: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2395 | allow ant command line control of junit test jvm args |
Improvement | Open | Major | Unresolved | Patrick D. Hunt | Patrick D. Hunt | Patrick D. Hunt | 21/Mar/16 18:48 | 05/Feb/20 07:16 | 3.7.0, 3.5.8 | build, tests | 0 | 2 | ZOOKEEPER-2664 | We're seeing some failing jobs (see below) and the speculation is that it might be due to ipv6 vs ipv4 usage. It would be nice to turn on "prefer ipv4" in the jvm but there is no easy way to do that. I'll propose that we add a variable to ant that's passed through to the jvm. ---- This is very odd. It failed 2 of the last three times it was run on H9 with the following: 2016-03-20 06:06:18,480 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@74] - TEST METHOD FAILED testBindByAddress java.net.SocketException: No such device at java.net.NetworkInterface.isLoopback0(Native Method) at java.net.NetworkInterface.isLoopback(NetworkInterface.java:339) at org.apache.zookeeper.test.ClientPortBindTest.testBindByAddress(ClientPortBindTest.java:61) https://builds.apache.org/job/ZooKeeper_branch34/buildTimeTrend Why would it pass one of the times though if there is no loopback device on the host? That seems very odd! |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 8 weeks ago | 0|i2uzov: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2394 | Many ZooKeeper properties are not configurable in zoo.cfg |
Improvement | Patch Available | Major | Unresolved | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 21/Mar/16 05:38 | 05/Feb/20 07:11 | 3.7.0, 3.5.8 | server | 0 | 2 | ZOOKEEPER-2195, ZOOKEEPER-2398 | Many ZooKeeper properties are not configurable in zoo.cfg. If configured in zoo.cfg, the {{QuorumPeerConfig}} parse logic will prepend "zookeeper.", which is not the same property name used in the code. So a property with the name {{abc.xyz}} becomes {{zookeeper.abc.xyz}}. Below are properties which cannot be configured in zoo.cfg and can only be configured using Java system properties: # follower.nodelay # leader.nodelay # readonlymode.enabled # jute.maxbuffer # znode.container.checkIntervalMs # znode.container.maxPerMinute This jira aims to make these properties configurable in zoo.cfg as well. |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 3 years, 39 weeks, 2 days ago | 0|i2uy3z: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
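The mismatch described above — zoo.cfg keys getting a "zookeeper." prefix that the reading code never uses — could be avoided by whitelisting the affected keys during parsing. A sketch (the helper is hypothetical; the key list is the one from the report):

```java
import java.util.Set;

public class CfgKeys {
    // Keys (from the report) that the server code reads WITHOUT the
    // "zookeeper." prefix; prefixing them while parsing zoo.cfg yields
    // a property name nothing ever looks up.
    static final Set<String> UNPREFIXED = Set.of(
            "follower.nodelay", "leader.nodelay", "readonlymode.enabled",
            "jute.maxbuffer", "znode.container.checkIntervalMs",
            "znode.container.maxPerMinute");

    // Map a zoo.cfg key to the property name the code actually reads.
    static String toSystemProperty(String cfgKey) {
        return UNPREFIXED.contains(cfgKey) ? cfgKey : "zookeeper." + cfgKey;
    }
}
```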
| ZooKeeper | ZOOKEEPER-2393 | Revert run-time dependency on log4j and slf4j-log4j12 |
Bug | Closed | Blocker | Fixed | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 17/Mar/16 21:39 | 21/Jul/16 16:18 | 19/Mar/16 17:56 | 3.5.2, 3.6.0 | server | 0 | 4 | ZOOKEEPER-2342, ZOOKEEPER-1371 | ZooKeeper's run-time dependency on log4j and slf4j-log4j12 was removed as part of the ZOOKEEPER-1371 jira work. The following things were done as part of ZOOKEEPER-1371: # Removed direct log4j API use from the code, using slf4j-api instead # Changed the log4j and slf4j-log4j12 run-time dependency to a test-time dependency # Upgraded the log4j, slf4j-log4j12 and slf4j-api versions. Here is the component-wise version change: #* (zookeeper)ivy.xml log4j: 1.2.15 -->1.7.5 #* src\contrib\loggraph\ivy.xml slf4j-api: 1.6.1 -->1.7.5 slf4j-log4j12: 1.6.1 -->1.7.5 log4j: 1.2.15 -->1.7.5 #* src\contrib\rest\ivy.xml slf4j-api: 1.6.1 -->1.7.5 slf4j-log4j12: 1.6.1 -->1.7.5 log4j: 1.2.15 -->1.7.5 #* src\contrib\zooinspector\ivy.xml slf4j-api: 1.6.1 -->1.7.5 slf4j-log4j12: 1.6.1 -->1.7.5 log4j: 1.2.15 -->1.7.5 The major problem with the ZOOKEEPER-1371 change is that it removed the run-time dependency. For more detail, refer to the ZOOKEEPER-2342 discussion. This jira reverts only the run-time dependency change, #2, on log4j and slf4j-log4j12. |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 4 years, 5 days ago |
Reviewed
|
0|i2uv3b: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2392 | Update netty to 3.7.1.Final |
Improvement | Closed | Minor | Fixed | Hendy Irawan | Hendy Irawan | Hendy Irawan | 16/Mar/16 10:50 | 21/Jul/16 16:18 | 18/Mar/16 13:25 | 3.4.6, 3.5.1 | 3.5.2, 3.6.0 | build | 0 | 2 | HADOOP-12927, HADOOP-12928 | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 |
Patch
|
4 years, 6 days ago |
Reviewed
|
https://github.com/apache/zookeeper/pull/57 | 0|i2urof: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2391 | setMin/MaxSessionTimeout of ZookeeperServer are implemented in quite a weak way |
Bug | Patch Available | Minor | Unresolved | Kazuaki Banzai | Kazuaki Banzai | Kazuaki Banzai | 16/Mar/16 08:34 | 12/Dec/16 03:39 | server | 0 | 3 | setMin/MaxSessionTimeout of ZooKeeperServer are implemented in quite a weak way: * -1 restores the default, but this is not documented. * values < -1 are permitted but make no sense. * min > max is permitted but makes no sense. |
9223372036854775807 | No Perforce job exists for this issue. | 6 | 9223372036854775807 | 3 years, 14 weeks, 3 days ago | 0|i2urhz: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
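Tightening the setters along the lines the report suggests might look like this (a sketch with hypothetical names, not the actual ZooKeeperServer code; -1 restores the tick-based default, while other non-positive or inverted values are rejected):

```java
public class SessionTimeouts {
    private final int tickTime;
    private int minSessionTimeout;
    private int maxSessionTimeout;

    public SessionTimeouts(int tickTime) {
        this.tickTime = tickTime;
        this.minSessionTimeout = 2 * tickTime;   // documented defaults:
        this.maxSessionTimeout = 20 * tickTime;  // 2 and 20 ticks
    }

    public void setMinSessionTimeout(int min) {
        if (min == -1) {
            minSessionTimeout = 2 * tickTime;    // -1 restores the default
        } else if (min <= 0 || min > maxSessionTimeout) {
            // reject values < -1, zero, and min > max instead of
            // silently accepting nonsense
            throw new IllegalArgumentException(
                    "min must be positive and <= maxSessionTimeout");
        } else {
            minSessionTimeout = min;
        }
    }

    public int getMinSessionTimeout() { return minSessionTimeout; }
    public int getMaxSessionTimeout() { return maxSessionTimeout; }
}
```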
| ZooKeeper | ZOOKEEPER-2390 | pom file needs license information |
Improvement | Resolved | Blocker | Duplicate | Patrick D. Hunt | Ben McCann | Ben McCann | 15/Mar/16 21:46 | 17/Mar/16 02:30 | 17/Mar/16 02:30 | 3.5.1, 3.5.2, 3.6.0, 4.0.0 | build | 0 | 1 | ZOOKEEPER-2373 | Can you add a license section to the pom file? org.apache.zookeeper:zookeeper:3.5.1-alpha does not specify one currently. Automated tools utilize the license in the pom to ensure our clients are using appropriately licensed software. <licenses> <license> <name>Apache License, Version 2.0</name> <url>http://www.apache.org/licenses/LICENSE-2.0.txt</url> <distribution>repo</distribution> </license> </licenses> |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 1 week ago | 0|i2uqsf: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2389 | read-only observer doesn't load transaction log when transitioning to read-only |
Bug | Open | Major | Unresolved | Unassigned | Jason Rosenberg | Jason Rosenberg | 14/Mar/16 20:02 | 22/Jun/18 00:49 | 3.4.8, 3.4.11 | 0 | 1 | I have rediscovered an issue that was apparently posted a while back (link below). It seems that if I configure an Observer node to be enabled for read-only mode, with syncEnabled = true, it properly syncs its transaction log with the quorum. However, if I shut down the quorum participants, and the Observer automatically transitions to read-only mode, it does not load the saved transaction log, and thus rejects any client connection with a zxid > 0. But if I restart the Observer node, it reloads its persisted transaction log and serves read-only requests at the latest zxid. Is this the correct behavior? Things run fine if instead of an observer, I do the same with a read-only participant. In this case, it transitions without issue to a read-only server, and serves the current transaction log. It seems to me this issue renders read-only observers completely useless. What am I missing here? I'm seeing this with 3.4.8. It seems this was discovered and reported a long time ago here: http://grokbase.com/t/zookeeper/user/14c16b1d22/issue-with-zxid-during-observer-failover-to-read-only |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 1 week, 3 days ago | 0|i2un47: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2388 | Unit tests failing on Solaris |
Bug | Closed | Blocker | Fixed | Mohammad Arshad | Patrick D. Hunt | Patrick D. Hunt | 14/Mar/16 17:06 | 21/Jul/16 16:18 | 17/Mar/16 02:16 | 3.5.2 | 3.5.2, 3.6.0 | tests | 0 | 4 | The same two tests are failing consistently on Solaris in 3.5/trunk (I don't see similar failures in 3.4, jenkins is mostly green there) org.apache.zookeeper.server.quorum.LocalPeerBeanTest.testClientAddress org.apache.zookeeper.server.quorum.QuorumPeerTest.testQuorumPeerListendOnSpecifiedClientIP |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 1 week ago |
Reviewed
|
0|i2umqf: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2387 | Attempt to gracefully stop the |
Improvement | Resolved | Major | Won't Fix | Biju Nair | Biju Nair | Biju Nair | 10/Mar/16 06:44 | 20/Mar/16 13:38 | 20/Mar/16 07:09 | 0 | 1 | To stop ZooKeeper service, {{kill}} is issued against the ZK process. It will be good to gracefully stop all the subcomponents. | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 4 days ago | 0|i2ugnz: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2386 | Cannot achieve quorum when middle server (in a q of 3) is unreachable |
Bug | Open | Major | Unresolved | Unassigned | Enis Soztutar | Enis Soztutar | 09/Mar/16 22:11 | 14/Sep/16 06:40 | 1 | 4 | Recently, we've observed a curious case where a quorum was not reached for days in a cluster of 3 nodes (zk0, zk1, zk2) in which the middle node zk1 is unreachable from the network. The leader election happens, and both zk0 and zk2 start the vote. Then each server sends notifications to every other server, including itself. The problem is that the zk1 VM is unavailable, so trying to open a socket to that server with a socket timeout of 5 seconds delays the notification processing of the vote sent from zk2 to zk2 (itself). The vote eventually arrives after 5 seconds, but by that time the peer (zk0) has already converted to the follower state. In the follower state, the follower tries to connect to the leader 5 times with a 1-second timeout (5 seconds in total). Since the leader does not start its peer port for 5 seconds after the follower starts, the follower always times out connecting to the leader. This cycle repeated for hours/days, even after restarting the servers several times. I believe this is related to the default timeouts (5-second socket timeout) and the follower-to-leader connection timeout (5 tries with a 1-second timeout). Only after setting {{zookeeper.cnxTimeout}} to 1 second did the quorum start operating. More logs coming shortly. zoo.cfg: {code} server.3=zk2-hostname:2889:3889 server.2=zk1-hostname:2889:3889 server.1=zk0-hostname:2889:3889 {code} |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 27 weeks, 1 day ago | 0|i2ufz3: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
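The workaround the reporter describes for ZOOKEEPER-2386 can be expressed as a configuration fragment. This is a hedged sketch: it assumes {{zookeeper.cnxTimeout}} is consumed as a JVM system property in milliseconds by the quorum connection manager, and that SERVER_JVMFLAGS is picked up by the server start scripts (e.g. via conf/zookeeper-env.sh).

```shell
# Illustrative workaround for the 5-second election-notification stall:
# lower the quorum connection timeout from its default to 1 second.
# zookeeper.cnxTimeout is a system property, read in milliseconds.
SERVER_JVMFLAGS="-Dzookeeper.cnxTimeout=1000"
export SERVER_JVMFLAGS
```

A shorter timeout lets the self-notification be processed before the peer gives up on the leader connection, at the cost of more aggressive reconnect attempts to genuinely slow peers.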
| ZooKeeper | ZOOKEEPER-2385 | Zookeeper trunk build is failing on windows |
Bug | Closed | Blocker | Fixed | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 09/Mar/16 14:45 | 21/Jul/16 16:18 | 15/Mar/16 12:17 | 3.4.9, 3.5.2, 3.6.0 | build | 0 | 5 | ZOOKEEPER-2378 | command {{ant tar}} fails with following error {code} D:\gitHome\zookeeperTrunk\build.xml:722: The following error occurred while executing this line: D:\gitHome\zookeeperTrunk\src\contrib\build.xml:47: The following error occurred while executing this line: D:\gitHome\zookeeperTrunk\src\contrib\build-contrib.xml:207: Unable to delete file D:\gitHome\zookeeperTrunk\src\java\lib\ivy-2.4.0.jar {code} This is happening only on windows. |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 1 week, 2 days ago |
Reviewed
|
0|i2uf4f: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2384 | Support atomic increment / decrement of znode value |
Improvement | Resolved | Major | Later | Unassigned | Ted Yu | Ted Yu | 09/Mar/16 12:26 | 11/Nov/16 19:05 | 11/Nov/16 19:05 | 0 | 6 | Use case is to store reference count (integer type) in znode. It is desirable to provide support for atomic increment / decrement of the znode value. Suggestion from Flavio: {quote} you can read the znode, keep the version of the znode, update the value, write back conditionally. The condition for the setData operation to succeed is that the version is the same that it read {quote} While the above is feasible, developer has to implement retry logic him/herself. It is not easy to combine increment / decrement with other operations using multi. |
atomic | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 19 weeks, 5 days ago | 0|i2ueu7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
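The version-conditioned update Flavio suggests for ZOOKEEPER-2384 can be sketched in plain Java. This is a simulation, not ZooKeeper code: VersionedNode stands in for a znode, and its setData mirrors the conditional setData semantics (succeed only if the caller's version matches, bump the version on success). Real client code would use getData with a Stat, then setData with stat.getVersion(), retrying on BadVersionException.

```java
import java.util.concurrent.atomic.AtomicReference;

public class AtomicIncrement {
    // Stand-in for one znode: {value, version}, version bumps on each successful write.
    static final class VersionedNode {
        final AtomicReference<int[]> state = new AtomicReference<>(new int[]{0, 0});
        int[] read() { return state.get(); }
        boolean setData(int newValue, int expectedVersion) {
            int[] cur = state.get();
            if (cur[1] != expectedVersion) return false;  // stale version: caller must retry
            return state.compareAndSet(cur, new int[]{newValue, expectedVersion + 1});
        }
    }

    // The retry loop the issue says every developer currently has to write by hand.
    static int increment(VersionedNode node, int delta) {
        while (true) {
            int[] cur = node.read();                      // read value + version
            if (node.setData(cur[0] + delta, cur[1]))     // conditional write
                return cur[0] + delta;                    // success: return new value
        }
    }

    public static void main(String[] args) {
        VersionedNode node = new VersionedNode();
        for (int i = 0; i < 100; i++) increment(node, 1);
        System.out.println(node.read()[0]); // 100
    }
}
```

The loop is lock-free but unbounded under contention, which is exactly the ergonomic burden a built-in atomic increment would remove.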
| ZooKeeper | ZOOKEEPER-2383 | Startup race in ZooKeeperServer |
Bug | Closed | Blocker | Fixed | Rakesh Radhakrishnan | Steven Rowe | Steven Rowe | 08/Mar/16 12:00 | 31/Mar/17 05:01 | 03/Jan/17 12:54 | 3.4.8 | 3.4.10, 3.5.3, 3.6.0 | jmx, server | 2 | 17 | SOLR-8724, ARIES-1684, ZOOKEEPER-2655, SOLR-9386 | In attempting to upgrade Solr's ZooKeeper dependency from 3.4.6 to 3.4.8 (SOLR-8724) I ran into test failures where attempts to create a node in a newly started standalone ZooKeeperServer were failing because of an assertion in MBeanRegistry. ZooKeeperServer.startup() first sets up its request processor chain then registers itself in JMX, but if a connection comes in before the server's JMX registration happens, registration of the connection will fail because it trips the assertion that (effectively) its parent (the server) has already registered itself. {code:java|title=ZooKeeperServer.java} public synchronized void startup() { if (sessionTracker == null) { createSessionTracker(); } startSessionTracker(); setupRequestProcessors(); registerJMX(); state = State.RUNNING; notifyAll(); } {code} {code:java|title=MBeanRegistry.java} public void register(ZKMBeanInfo bean, ZKMBeanInfo parent) throws JMException { assert bean != null; String path = null; if (parent != null) { path = mapBean2Path.get(parent); assert path != null; } {code} This problem appears to be new with ZK 3.4.8 - AFAIK Solr never had this issue with ZK 3.4.6. |
9223372036854775807 | No Perforce job exists for this issue. | 9 | 9223372036854775807 | 3 years, 8 weeks, 3 days ago | 0|i2ucfr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
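The race in ZOOKEEPER-2383 comes down to ordering: the server's bean must be registered before any connection is allowed to register a child bean under it. A minimal pure-JDK sketch of the fixed ordering, with a toy registry enforcing the same parent-must-exist invariant as MBeanRegistry's assertion (names here are illustrative, not ZooKeeper's):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;

public class RegisterBeforeServing {
    // Toy registry: a child may only register under an already-registered parent.
    static final Map<String, String> beans = new ConcurrentHashMap<>();
    static void register(String bean, String parent) {
        if (parent != null && !beans.containsKey(parent))
            throw new IllegalStateException("parent not registered: " + parent);
        beans.put(bean, parent == null ? "/" : "/" + parent);
    }

    public static void main(String[] args) throws Exception {
        CountDownLatch accepting = new CountDownLatch(1);   // the "accept connections" gate
        Thread connection = new Thread(() -> {
            try { accepting.await(); } catch (InterruptedException e) { return; }
            register("Connection-1", "Server");             // child bean for a new connection
        });
        connection.start();
        register("Server", null);   // fixed order: register the server bean first...
        accepting.countDown();      // ...and only open the gate afterwards
        connection.join();
        System.out.println(beans.containsKey("Connection-1")); // true
    }
}
```

In the buggy ordering the gate opens before register("Server", null) runs, so an early connection trips the parent check, matching the assertion failure in the report.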
| ZooKeeper | ZOOKEEPER-2382 | Make fsync.warningthresholdms property configurable through zoo.cfg |
Improvement | Resolved | Minor | Not A Problem | Biju Nair | Biju Nair | Biju Nair | 05/Mar/16 06:43 | 20/Mar/16 00:33 | 17/Mar/16 13:02 | 0 | 1 | Currently {{fsync.warningthresholdms}} property can be set as a Java system property. But it would help if this property can be made configurable through {{zoo.cfg}} so that configuration management tools can leverage it. Also the Java system property name should be standardized (refer ZOOKEEPER-2316) so that the property is inline with the standard followed by other properties. | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 4 days ago | 0|i2u8jj: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2381 | ReconfigTest.testPortChange has been failing intermittently |
Test | Resolved | Major | Duplicate | Unassigned | Akihiro Suda | Akihiro Suda | 04/Mar/16 03:35 | 04/Mar/16 04:09 | 04/Mar/16 04:09 | server, tests | 0 | 1 | ZOOKEEPER-2137 | ReconfigTest.testPortChange has been failing intermittently: - Feb 12, 2016: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/3045/ - Feb 16, 2015: http://permalink.gmane.org/gmane.comp.java.zookeeper.devel/25521 I can also locally reproduce with 40d0804c (Mar 3, 2016). The error message looks like as if it is a linearizability violation for sync+read operation, but I'm still not sure. stack trace: {code} junit.framework.AssertionFailedError: expected:<test[1]> but was:<test[0]> at org.apache.zookeeper.test.ReconfigTest.testNormalOperation(ReconfigTest.java:150) at org.apache.zookeeper.test.ReconfigTest.testPortChange(ReconfigTest.java:598) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:79) {code} test code: https://github.com/apache/zookeeper/blob/2cc945442e218d0757983ac42e2a5d86a94ccb30/src/java/test/org/apache/zookeeper/test/ReconfigTest.java#L150 {code:java} for (int j = 0; j < 30; j++) { try { .. String data = "test" + j; writer.setData("/test", data.getBytes(), -1); reader.sync("/", null, null); byte[] res = reader.getData("/test", null, new Stat()); Assert.assertEquals(data, new String(res)); break; } catch (KeeperException.ConnectionLossException e) { if (j < 29) { Thread.sleep(1000); } else { // test fails if we still can't connect to the quorum after // 30 seconds. Assert.fail("client could not connect to reestablished quorum: giving up after 30+ seconds."); } } } {code} |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 2 weeks, 6 days ago | 0|i2u5av: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2380 | Deadlock between leader shutdown and forwarding ACK to the leader |
Bug | Closed | Blocker | Fixed | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 04/Mar/16 00:32 | 21/Jul/16 16:18 | 23/Jun/16 17:20 | 3.5.2, 3.6.0 | server | 1 | 8 | Zookeeper enters into deadlock while shutting down itself, thus making zookeeper service unavailable as deadlocked server is a leader. Here is the thread dump: {code} "QuorumPeer[myid=1](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled)" #25 prio=5 os_prio=0 tid=0x00007fbc502a6800 nid=0x834 in Object.wait() [0x00007fbc4d9a8000] java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) at java.lang.Thread.join(Thread.java:1245) - locked < 0x00000000feb78000> (a org.apache.zookeeper.server.SyncRequestProcessor) at java.lang.Thread.join(Thread.java:1319) at org.apache.zookeeper.server.SyncRequestProcessor.shutdown(SyncRequestProcessor.java:196) at org.apache.zookeeper.server.quorum.ProposalRequestProcessor.shutdown(ProposalRequestProcessor.java:90) at org.apache.zookeeper.server.PrepRequestProcessor.shutdown(PrepRequestProcessor.java:1016) at org.apache.zookeeper.server.quorum.LeaderRequestProcessor.shutdown(LeaderRequestProcessor.java:78) at org.apache.zookeeper.server.ZooKeeperServer.shutdown(ZooKeeperServer.java:561) - locked < 0x00000000feb61e20> (a org.apache.zookeeper.server.quorum.LeaderZooKeeperServer) at org.apache.zookeeper.server.quorum.QuorumZooKeeperServer.shutdown(QuorumZooKeeperServer.java:169) - locked < 0x00000000feb61e20> (a org.apache.zookeeper.server.quorum.LeaderZooKeeperServer) at org.apache.zookeeper.server.quorum.LeaderZooKeeperServer.shutdown(LeaderZooKeeperServer.java:102) - locked < 0x00000000feb61e20> (a org.apache.zookeeper.server.quorum.LeaderZooKeeperServer) at org.apache.zookeeper.server.quorum.Leader.shutdown(Leader.java:637) at org.apache.zookeeper.server.quorum.Leader.lead(Leader.java:590) - locked < 0x00000000feb781a0> (a org.apache.zookeeper.server.quorum.Leader) at 
org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:1108) "SyncThread:1" #46 prio=5 os_prio=0 tid=0x00007fbc5848f000 nid=0x867 waiting for monitor entry [0x00007fbc4ca90000] java.lang.Thread.State: BLOCKED (on object monitor) at org.apache.zookeeper.server.quorum.Leader.processAck(Leader.java:784) - waiting to lock <0x00000000feb781a0> (a org.apache.zookeeper.server.quorum.Leader) at org.apache.zookeeper.server.quorum.AckRequestProcessor.processRequest(AckRequestProcessor.java:46) at org.apache.zookeeper.server.SyncRequestProcessor.flush(SyncRequestProcessor.java:183) at org.apache.zookeeper.server.SyncRequestProcessor.run(SyncRequestProcessor.java:113) {code} Leader.lead() calls shutdown() from the synchronized block, it acquired lock on Leader.java instance {code} while (true) { synchronized (this) { long start = Time.currentElapsedTime(); ..... {code} In the shutdown flow SyncThread is trying to acquire lock on the same Leader.java instance. Leader thread acquired lock and waiting for SyncThread shutdown. SyncThread waiting for the lock to complete its shutdown. This is how ZooKeeper entered into deadlock |
9223372036854775807 | No Perforce job exists for this issue. | 8 | 9223372036854775807 | 3 years, 39 weeks ago |
Reviewed
|
0|i2u55b: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
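The deadlock in ZOOKEEPER-2380 is the classic shape of joining a worker thread while holding a monitor that the worker needs. A pure-JDK sketch of the fix pattern, with illustrative names rather than ZooKeeper's real classes: mutate shutdown state under the lock, but release the lock before joining the worker.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class ShutdownOutsideLock {
    static final class Leader {
        final AtomicBoolean stopped = new AtomicBoolean(false);

        // The worker needs the Leader monitor, like SyncRequestProcessor
        // flushing ACKs through Leader.processAck().
        synchronized void processAck() { }

        void shutdown(Thread worker) throws InterruptedException {
            synchronized (this) {      // state changes happen under the lock...
                stopped.set(true);
            }
            worker.join();             // ...but the join happens after releasing it
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Leader leader = new Leader();
        Thread sync = new Thread(() -> {
            while (!leader.stopped.get()) leader.processAck();
        });
        sync.start();
        leader.shutdown(sync);
        System.out.println(sync.isAlive()); // false: shutdown completes without deadlock
    }
}
```

If shutdown() instead held the monitor across the join (as Leader.lead() effectively does in the report), the worker would block forever in processAck() and the join would never return.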
| ZooKeeper | ZOOKEEPER-2379 | recent commit broke findbugs qabot check |
Bug | Closed | Blocker | Fixed | Rakesh Radhakrishnan | Patrick D. Hunt | Patrick D. Hunt | 03/Mar/16 10:22 | 21/Jul/16 16:18 | 04/Mar/16 19:07 | 3.4.9, 3.5.2, 3.6.0 | 3.4.9, 3.5.2, 3.6.0 | build | 0 | 5 | ZOOKEEPER-2375 | A recent commit seems to have broken findbugs, looks like it's in ZooKeeperSaslClient see: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/3075//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 2 weeks, 5 days ago |
Reviewed
|
0|i2u3qf: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2378 | upgrade ivy to recent version |
Improvement | Closed | Critical | Fixed | Patrick D. Hunt | Patrick D. Hunt | Patrick D. Hunt | 03/Mar/16 02:03 | 21/Jul/16 16:18 | 03/Mar/16 12:55 | 3.4.8, 3.5.1, 3.6.0 | 3.4.9, 3.5.2, 3.6.0 | build | 0 | 2 | ZOOKEEPER-2385, ZOOKEEPER-2373 | 2.4.0 is the current version. | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 3 weeks ago |
Reviewed
|
0|i2u2sn: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2377 | zkServer.sh should resolve canonical path from symlinks |
Improvement | Patch Available | Minor | Unresolved | Siddhartha | Siddhartha | Siddhartha | 25/Feb/16 22:09 | 03/Mar/16 01:54 | 3.4.8 | scripts | 0 | 4 | Centos 6 | If zkServer.sh is started from a symlink, it is not able to correctly source the other scripts because it looks in the wrong path. Attached patch fixes this by first resolving absolute path to the script. |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 |
Patch
|
4 years, 3 weeks ago | 0|i2tsi7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2376 | Server fails to start if there is a zero-length TxnLog file present in the log directory |
Bug | Resolved | Major | Duplicate | Unassigned | Dmitry Ryabkov | Dmitry Ryabkov | 25/Feb/16 14:45 | 25/Feb/16 15:39 | 25/Feb/16 15:39 | 3.4.6 | server | 0 | 3 | ZOOKEEPER-2332 | Windows | If there is an empty TxnLog file in the log file folder, ZooKeeper server fails to start. This is the exception it logs: 2015-11-02 07:41:10.479 -0600 (,,,) main : ERROR org.apache.zookeeper.server.ZooKeeperServerMain - Unexpected exception, exiting abnormally java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63) at org.apache.zookeeper.server.persistence.FileHeader.deserialize(FileHeader.java:64) at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.inStreamCreated(FileTxnLog.java:576) at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.createInputArchive(FileTxnLog.java:595) at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.goToNextLog(FileTxnLog.java:561) at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.next(FileTxnLog.java:643) at org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:158) at org.apache.zookeeper.server.ZKDatabase.loadDataBase(ZKDatabase.java:223) at org.apache.zookeeper.server.ZooKeeperServer.loadData(ZooKeeperServer.java:272) at org.apache.zookeeper.server.ZooKeeperServer.startdata(ZooKeeperServer.java:399) at org.apache.zookeeper.server.NIOServerCnxnFactory.startup(NIOServerCnxnFactory.java:122) at org.apache.zookeeper.server.ZooKeeperServerMain.runFromConfig(ZooKeeperServerMain.java:113) at org.apache.zookeeper.server.ZooKeeperServerMain.initializeAndRun(ZooKeeperServerMain.java:86) at org.apache.zookeeper.server.ZooKeeperServerMain.main(ZooKeeperServerMain.java:52) at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:116) at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:78) Zero-length 
log file can be created if FileTxnLog.append() crashes after it creates FileOutputStream but before it serializes and flushes the header. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 4 weeks ago | 0|i2trs7: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
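One defensive shape for ZOOKEEPER-2376 is to skip zero-length files when collecting transaction logs to replay, since a zero-length file cannot contain a header to deserialize. This is a hedged sketch of that filter in plain Java, not the actual FileTxnLog fix:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class SkipEmptyLogs {
    // Drop zero-length candidates before any header deserialization is attempted;
    // such files arise when the log writer crashes after creating the stream but
    // before writing and flushing the FileHeader.
    static List<File> usableLogs(File[] candidates) {
        List<File> logs = new ArrayList<>();
        for (File f : candidates) {
            if (f.length() > 0) logs.add(f);
        }
        return logs;
    }
}
```

Whether to silently skip, rename aside, or warn on such files is a policy choice; the sketch only shows where the guard would sit.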
| ZooKeeper | ZOOKEEPER-2375 | Prevent multiple initialization of login object in each ZooKeeperSaslClient instance |
Bug | Closed | Blocker | Fixed | yuemeng | yuemeng | yuemeng | 25/Feb/16 04:18 | 21/Jul/16 16:18 | 02/Mar/16 01:58 | 3.4.6, 3.4.8, 3.5.0, 3.5.1 | 3.4.9, 3.5.2, 3.6.0 | java client | 0 | 8 | ZOOKEEPER-2379 | If many ZooKeeperSaslClient instances exist in one process, each instance calls the synchronized method createSaslClient(). But each instance only locks its own object (that is to say, the synchronization only locks the instance itself), while all instances can access the static variable login; the synchronization therefore cannot prevent other threads from accessing the static login object. This can cause more than one ZooKeeperSaslClient instance to use the same login object, so login.startThreadIfNeeded() can be called more than once for the same login object. This causes the following problem: ERROR | [Executor task launch worker-1-SendThread(fi1:24002)] | Exception while trying to create SASL client: java.lang.IllegalThreadStateException | org.apache.zookeeper.client.ZooKeeperSaslClient.createSaslClient(ZooKeeperSaslClient.java:305) |
9223372036854775807 | No Perforce job exists for this issue. | 3 | 9223372036854775807 | 4 years, 3 weeks, 1 day ago | 0|i2tqi7: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2374 | Can not telnet 2181 port on aws ec2 server |
Bug | Resolved | Blocker | Cannot Reproduce | Unassigned | zhupengfei | zhupengfei | 24/Feb/16 07:28 | 17/Mar/16 10:32 | 17/Mar/16 10:32 | 3.4.6 | server | 0 | 2 | This is the second time I faced the problem on ec2, my activemq stomp port have the same problem, but tcp message port works fine. I have checked zookeeper.out, no error log found. And aws technical support tell it maybe caused by zookeeper. OS Type: Amazon Linux AMI Network Test Result: -bash-4.1$ netstat | grep 2181 -bash-4.1$ telnet localhost 2181 Trying 127.0.0.1... ^C -bash-4.1$ netstat -tunpl|grep 2181 (Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.) tcp 0 0 :::2181 :::* LISTEN 17923/java -bash-4.1$ netstat -an |grep 2181 tcp 0 1 172.12.10.152:60171 172.12.10.152:2181 SYN_SENT tcp 0 0 :::2181 :::* LISTEN tcp 0 1 ::ffff:127.0.0.1:36032 ::ffff:127.0.0.1:2181 SYN_SENT |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 1 week ago | 0|i2than: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2373 | Licenses section missing from pom file |
Improvement | Closed | Blocker | Fixed | Patrick D. Hunt | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 23/Feb/16 19:02 | 21/Jul/16 16:18 | 03/Mar/16 12:58 | 3.4.9, 3.5.2, 3.6.0 | 0 | 3 | ZOOKEEPER-2446, ZOOKEEPER-2390, ZOOKEEPER-2378 | The pom file here: https://repo1.maven.org/maven2/org/apache/zookeeper/zookeeper/3.4.8/zookeeper-3.4.8.pom should have a section like this: {noformat} <licenses> <license> <name>The Apache Software License, Version 2.0</name> <url>http://www.apache.org/licenses/LICENSE-2.0.txt</url> <distribution>repo</distribution> </license> </licenses> {noformat} |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 3 weeks ago |
Reviewed
|
0|i2t8y7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2372 | Add ability to specify user under which zookeeper process should be started using zkServer.sh |
Improvement | Open | Minor | Unresolved | Unassigned | Siddhartha | Siddhartha | 23/Feb/16 10:18 | 03/Mar/16 02:13 | 3.4.8 | scripts | 0 | 1 | Linux Centos 6 Java 1.7 |
Currently the zkServer.sh script will start ZooKeeper as the user invoking the script. It would be good to add the ability to specify the user (maybe in a $USER variable in conf/zookeeper-env.sh) under which the ZooKeeper process should be run, so that any user invoking the script does not accidentally start it as their own user (esp. as root). Addition of this feature would make zkServer.sh the only script required to manage the ZooKeeper process. Thanks |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 3 weeks ago | 0|i2t7yf: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2371 | zkServer.sh status does not work if JMX Port is enabled |
Bug | Resolved | Minor | Not A Bug | Unassigned | Siddhartha | Siddhartha | 23/Feb/16 10:13 | 25/Feb/16 20:42 | 25/Feb/16 20:42 | 3.4.8 | scripts | 0 | 1 | Linux Centos 6 Java 1.7 |
If I try to execute 'bin/zkServer.sh status' while having '-Dcom.sun.management.jmxremote.port=9011' in $JVMFLAGS, ZooKeeper quits with "Error: Exception thrown by the agent : java.rmi.server.ExportException: Port already in use: 9011". Either some other means of getting status should be used, or some way of not setting the JMX variables in this case should be added. Thanks |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 3 weeks, 6 days ago | 0|i2t7xj: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2370 | Can't access Znodes after adding ACL with SASL |
Bug | Resolved | Major | Not A Problem | Unassigned | Chao Sun | Chao Sun | 23/Feb/16 02:19 | 24/Mar/17 12:09 | 23/Feb/16 06:47 | 3.4.5 | java client | 0 | 3 | (My apology if this is not a bug.) I'm trying to use a ZK client which has successfully authenticated with a secure ZK server using principal {{me/hostname@EXAMPLE.COM}}. However, the following simple commands failed: {code} [zk: hostname(CONNECTED) 0] create /zk-test "1" Created /zk-test [zk: hostname(CONNECTED) 1] setAcl /zk-test sasl:me/hostname@EXAMPLE.COM:cdrwa cZxid = 0x3e3b ctime = Mon Feb 22 23:10:36 PST 2016 mZxid = 0x3e3b mtime = Mon Feb 22 23:10:36 PST 2016 pZxid = 0x3e3b cversion = 0 dataVersion = 0 aclVersion = 1 ephemeralOwner = 0x0 dataLength = 3 numChildren = 0 [zk: hostname(CONNECTED) 2] getAcl /zk-test 'sasl,'me/hostname@EXAMPLE.COM : cdrwa [zk: hostname(CONNECTED) 3] ls /zk-test Authentication is not valid : /zk-test [zk: hostname(CONNECTED) 4] create /zk-test/c "2" Authentication is not valid : /zk-test/c {code} I wonder what I did wrong here, or is this behavior intentional? how can I delete the znodes? Thanks. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 2 years, 51 weeks, 6 days ago | 0|i2t78f: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2369 | Flushing DataOutputStream before calling toByteArray on the underlying ByteArrayOutputStream |
Bug | Open | Minor | Unresolved | emopers | emopers | emopers | 18/Feb/16 04:13 | 03/Mar/16 19:14 | 0 | 1 | In ./src/java/main/org/apache/zookeeper/server/ZooKeeperServer.java {code} ByteArrayOutputStream baos = new ByteArrayOutputStream(); BinaryOutputArchive bos = BinaryOutputArchive.getArchive(baos); bos.writeInt(-1, "len"); rsp.serialize(bos, "connect"); if (!cnxn.isOldClient) { bos.writeBool( this instanceof ReadOnlyZooKeeperServer, "readOnly"); } baos.close(); ByteBuffer bb = ByteBuffer.wrap(baos.toByteArray()); {code} BinaryOutputArchive internally uses DataOutputStream as its stream, and when a DataOutputStream instance wraps an underlying ByteArrayOutputStream instance, it is recommended to flush or close the DataOutputStream before invoking the underlying instance's toByteArray(). Also, it is good practice to call flush/close explicitly, as mentioned for example in http://stackoverflow.com/questions/2984538/how-to-use-bytearrayoutputstream-and-dataoutputstream-simultaneously-java. Moreover, "baos.close()" on the second-to-last line is not required, as it is a no-op according to the [javadoc|http://docs.oracle.com/javase/7/docs/api/java/io/ByteArrayOutputStream.html] {quote} Closing a ByteArrayOutputStream has no effect. The methods in this class can be called after the stream has been closed without generating an IOException. {quote} The patch is to add a flush call on "bos" before calling toByteArray on "baos". Similar behavior is also present in the following files: ./src/java/main/org/apache/zookeeper/ClientCnxn.java ./src/java/main/org/apache/zookeeper/server/ZKDatabase.java ./src/java/main/org/apache/zookeeper/server/persistence/Util.java ./src/java/main/org/apache/zookeeper/server/NIOServerCnxn.java Let me know if this looks good. I can provide a patch. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 |
Patch
|
4 years, 3 weeks ago | 0|i2szs7: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
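The flush-before-toByteArray pattern from ZOOKEEPER-2369 can be shown with only the JDK. Note that DataOutputStream itself writes through to the underlying stream, so the flush here is defensive: it matters as soon as a buffered stream is interposed, and it makes the snapshot point explicit either way.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class FlushBeforeToByteArray {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        DataOutputStream dos = new DataOutputStream(baos);
        dos.writeInt(-1);   // e.g. the "len" placeholder written in ZooKeeperServer
        dos.flush();        // ensure everything has reached baos before snapshotting
        byte[] out = baos.toByteArray();
        System.out.println(out.length); // 4 (one int)
    }
}
```

As the issue notes, closing the ByteArrayOutputStream is a no-op, so flushing (or closing) the wrapping DataOutputStream is the call that actually guarantees the bytes are in place.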
| ZooKeeper | ZOOKEEPER-2368 | Client watches are not disconnected on close |
Improvement | Closed | Major | Fixed | Timothy James Ward | Timothy James Ward | Timothy James Ward | 15/Feb/16 04:28 | 20/May/19 13:50 | 20/Jun/18 09:23 | 3.4.0, 3.5.0 | 3.6.0, 3.5.5 | 0 | 5 | 0 | 13200 | CURATOR-377 | If I have a ZooKeeper client connected to an ensemble then obviously I can register watches. If the client is disconnected (for example by a failing ensemble member) then I get a disconnection event for all of my watches. If, on the other hand, my client is closed then I *do not* get a disconnection event. This asymmetry makes it really hard to clear up properly when using the asynchronous API, as there is no way to "fail" data reads/updates when the client is closed. I believe that the correct behaviour should be for all watchers to receive a disconnection event when the client is closed. The watchers can then respond as appropriate, and can differentiate between a "server disconnect" and a "client disconnect" by checking the ZooKeeper#getState() method. This would not be a breaking behaviour change as Watchers are already required to handle disconnection events. |
100% | 100% | 13200 | 0 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 1 year, 39 weeks, 1 day ago | 0|i2sudz: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
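The symmetry ZOOKEEPER-2368 asks for, where client close() notifies watchers just like a server-side disconnect does, can be sketched with a toy observer pattern (the names and the Event enum are illustrative, not ZooKeeper's API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class ClientWithCloseEvents {
    enum Event { DISCONNECTED, CLOSED }

    private final List<Consumer<Event>> watchers = new ArrayList<>();
    private boolean open = true;

    void watch(Consumer<Event> w) { watchers.add(w); }

    // On close, fire an event to every watcher so asynchronous callers can
    // fail pending reads/updates instead of waiting forever.
    void close() {
        if (!open) return;
        open = false;
        for (Consumer<Event> w : watchers) w.accept(Event.CLOSED);
    }
}
```

A distinct CLOSED-style signal (or, as the issue suggests, checking the client state on a disconnection event) lets watchers tell a client-initiated shutdown apart from a server failure.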
| ZooKeeper | ZOOKEEPER-2367 | Unable to establish quorum when hostnames are not resolvable between all of the nodes |
Bug | Resolved | Critical | Duplicate | Unassigned | Timothy James Ward | Timothy James Ward | 12/Feb/16 13:52 | 15/Feb/16 06:59 | 15/Feb/16 06:59 | 3.5.0 | quorum | 0 | 1 | ZOOKEEPER-2171 | If I have a set of three machines, all of which have locally defined hostnames A, B and C (i.e. B and C cannot look up A by name). I am unable to control the DNS setup, and I don't want to manually reimplement DNS using entries in the hosts file. A is on IP 192.168.1.16 B is on IP 192.168.1.17 C is on IP 192.168.1.18 All of my ZK configuration uses literal IP addresses (no hostnames anywhere), however I still see a hostname appearing in the leader log (in this case the leader was C): java.net.UnknownHostException: B at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:589) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:369) at org.apache.zookeeper.server.quorum.QuorumCnxManager.receiveConnection(QuorumCnxManager.java:291) at org.apache.zookeeper.server.quorum.QuorumCnxManager$Listener.run(QuorumCnxManager.java:558) This is caused by the initiateConnection method of QuorumCnxManager, which contains the line: self.getElectionAddress().getHostName() The use of getHostName() forces a reverse DNS lookup, which I do not want. The code should use getHostString() instead, which will use the actual data from config, and avoid unresolvable hosts being sent over the wire. This will mean that node C attempts to connect to 192.168.1.17, not "B". |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 5 weeks, 3 days ago | 0|i2ss9z: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
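The getHostName()-versus-getHostString() distinction at the heart of ZOOKEEPER-2367 is pure JDK and easy to demonstrate. getHostString() returns the string the address was constructed with and never triggers a reverse-DNS lookup, which is exactly why the issue proposes it for the election address:

```java
import java.net.InetSocketAddress;

public class NoReverseLookup {
    public static void main(String[] args) {
        // Address built from a literal IP, as in the reporter's all-IP config
        // (192.0.2.17 is a documentation-range address used here for illustration).
        InetSocketAddress addr = new InetSocketAddress("192.0.2.17", 3889);

        // getHostString(): the configured literal, no DNS involved.
        // getHostName(): may perform a reverse lookup and substitute a resolved
        // name such as "B", which peers without that DNS entry cannot resolve.
        System.out.println(addr.getHostString()); // 192.0.2.17
    }
}
```

Using getHostString() keeps the wire data identical to the configuration, so node C attempts to connect to 192.168.1.17 rather than the unresolvable name "B".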
| ZooKeeper | ZOOKEEPER-2366 | Reconfiguration of client port causes a socket leak |
Bug | Closed | Blocker | Fixed | Flavio Paiva Junqueira | Timothy James Ward | Timothy James Ward | 12/Feb/16 11:59 | 21/Jul/16 16:18 | 23/Jun/16 17:46 | 3.5.0 | 3.5.2, 3.6.0 | quorum | 0 | 8 | The NIOServerCnxnFactory reconfigure method can leak server sockets, and hence make ports unusable until the JVM restarts: The first line of the method takes a reference to the current ServerSocketChannel and then the next line replaces it. The subsequent interactions with the server socket can fail (for example if the reconfiguration tries to bind to an in-use port). If they fail *before* the call to oldSS.close() then oldSS is *never* closed. This holds that port open forever, and prevents the user from rolling back to the previous port! The code from reconfigure is shown below: ServerSocketChannel oldSS = ss; try { this.ss = ServerSocketChannel.open(); ss.socket().setReuseAddress(true); LOG.info("binding to port " + addr); ss.socket().bind(addr); ss.configureBlocking(false); acceptThread.setReconfiguring(); oldSS.close(); acceptThread.wakeupSelector(); try { acceptThread.join(); } catch (InterruptedException e) { LOG.error("Error joining old acceptThread when reconfiguring client port " + e.getMessage()); } acceptThread = new AcceptThread(ss, addr, selectorThreads); acceptThread.start(); } catch(IOException e) { LOG.error("Error reconfiguring client port to " + addr + " " + e.getMessage()); } |
9223372036854775807 | No Perforce job exists for this issue. | 8 | 9223372036854775807 | 3 years, 39 weeks ago |
Reviewed
|
0|i2ss4v: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
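The leak in ZOOKEEPER-2366 comes from swapping in the new socket before the fallible steps run. A leak-free shape closes whichever socket loses: on success the old one, on failure the new one, so the old port stays usable for rollback. This is a pure-Java simulation with a fake socket type (the bind failure is simulated by a flag, not a real bind):

```java
import java.io.Closeable;
import java.io.IOException;

public class ReconfigureRollback {
    static final class FakeServerSocket implements Closeable {
        boolean closed;
        @Override public void close() { closed = true; }
    }

    // Returns the socket the server should keep serving on.
    static FakeServerSocket reconfigure(FakeServerSocket oldSS, boolean bindFails) {
        FakeServerSocket newSS = new FakeServerSocket();
        try {
            if (bindFails) throw new IOException("bind failed");  // simulated bind()
            oldSS.close();   // success path: release the old port only now
            return newSS;
        } catch (IOException e) {
            newSS.close();   // failure path: release the new socket, keep the old
            return oldSS;
        }
    }

    public static void main(String[] args) {
        FakeServerSocket old = new FakeServerSocket();
        FakeServerSocket kept = reconfigure(old, true);
        System.out.println(kept == old && !old.closed); // true: old socket survives a failed bind
    }
}
```

The key ordering difference from the reported code: nothing assigns the new socket into the serving field, and nothing closes the old one, until every step that can throw has succeeded.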
| ZooKeeper | ZOOKEEPER-2365 | JAAS configuration section error is confusing |
Bug | Open | Trivial | Unresolved | Biju Nair | Dan Fitch | Dan Fitch | 10/Feb/16 17:24 | 05/Feb/20 07:16 | 3.4.6 | 3.7.0, 3.5.8 | java client | 1 | 6 | ZOOKEEPER-2345 | Ubuntu x86_64 openjdk-7-jre | I have zookeeper running normally just fine in a 3-server cluster. Then I try to configure zookeeper to use Kerberos, following docs in the Solr wiki here: https://cwiki.apache.org/confluence/display/solr/Kerberos+Authentication+Plugin I can't even get to the fun Kerberos errors. When I start with {{JVMFLAGS="-Djava.security.auth.login.config=/opt/zookeeper/jaas-server.conf"}} and this jaas-server.conf: {code} Server { com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true keyTab=/keytabs/vdev-solr-01.keytab storeKey=true doNotPrompt=true useTicketCache=false debug=true principal="HTTP/<snip>"; } {code} I get this in the log: {code} 2016-02-10 16:16:51,327 [myid:1] - ERROR [main:ServerCnxnFactory@195] - No JAAS configuration section named 'Server' was foundin '/opt/zookeeper/jaas-server.conf'. 2016-02-10 16:16:51,328 [myid:1] - ERROR [main:QuorumPeerMain@89] - Unexpected exception, exiting abnormally java.io.IOException: No JAAS configuration section named 'Server' was foundin '/opt/zookeeper/jaas-server.conf'. at org.apache.zookeeper.server.ServerCnxnFactory.configureSaslLogin(ServerCnxnFactory.java:196) at org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:87) at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:130) at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:111) at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:78) {code} (Note the "foundin" typo.) I get the exact same error if the jaas-server.conf file exists, or does not. So later I found that the Solr wiki was wrong and lost the double quotes around the keytab value. 
It would be nice if ZooKeeper spewed a more useful message when it can't parse the configuration. |
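Since the root cause the reporter identified was the Solr wiki dropping the double quotes around the keyTab value, a working jaas-server.conf presumably looks like the following (a sketch assuming quoting was the only problem; the paths and principal are the reporter's, and the elided principal is left as-is):

```
Server {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="/keytabs/vdev-solr-01.keytab"
    storeKey=true
    doNotPrompt=true
    useTicketCache=false
    debug=true
    principal="HTTP/<snip>";
}
```

Without the quotes around the keyTab path the JAAS parser can reject the whole file, which would explain why the 'Server' section is reported as missing even though the file exists.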
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 1 year, 3 weeks, 2 days ago | 0|i2sp3r: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2364 | "ant docs" fails on branch-3.5 due to missing releasenotes.xml. |
Bug | Closed | Blocker | Fixed | Patrick D. Hunt | Chris Nauroth | Chris Nauroth | 08/Feb/16 16:36 | 16/Aug/16 17:44 | 21/Mar/16 16:46 | 3.5.2 | 3.5.2, 3.6.0 | build, documentation | 0 | 3 | ZOOKEEPER-2514 | "ant docs" is failing on branch-3.5. (Both trunk and branch-3.4 are fine.) The root cause appears to be a missing file on branch-3.5: src/docs/src/documentation/content/xdocs/releasenotes.xml. This causes Forrest to report a failure due to broken hyperlinks targeting releasenotes.html. |
9223372036854775807 | No Perforce job exists for this issue. | 4 | 9223372036854775807 | 4 years, 3 days ago |
Reviewed
|
0|i2skxz: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2363 | DatadirCleanupManager never created by ZookeeperServerMain |
Bug | Open | Major | Unresolved | Unassigned | David Foregger | David Foregger | 05/Feb/16 12:09 | 03/Aug/16 01:51 | 3.4.0 | 0 | 3 | h4. Background ZOOKEEPER-1107 introduced a DatadirCleanupManager to automatically purge snapshots. This can be configured using autopurge.snapRetainCount and autopurge.purgeInterval. This is documented [here|http://zookeeper.apache.org/doc/r3.4.5/zookeeperAdmin.html#Ongoing+Data+Directory+Cleanup] and [there|http://zookeeper.apache.org/doc/r3.4.5/zookeeperAdmin.html#sc_advancedConfiguration]. h4. Symptoms Autopurging does not work when running a standalone ZooKeeperServer. The DatadirCleanupManager is started by the QuorumPeerMain, but there is no similar setup in ZooKeeperServerMain. ServerConfig does not hold autopurge properties. h4. Expected Behavior Starting a standalone ZooKeeper server should enable autopurging with the same behavior as a quorum server. |
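For context, the retention policy that autopurge.snapRetainCount implies can be sketched in plain Java. This is an illustration, not ZooKeeper's actual PurgeTxnLog/DatadirCleanupManager code; the class and method names here are hypothetical:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch of the snapshot-retention policy behind autopurge.snapRetainCount:
// keep the N most recent snapshots, purge everything older.
public class PurgeSketch {
    // Given snapshot zxids (higher = newer), return the zxids to delete,
    // retaining the `retainCount` most recent ones.
    static List<Long> snapshotsToPurge(List<Long> snapshotZxids, int retainCount) {
        List<Long> sorted = new ArrayList<>(snapshotZxids);
        sorted.sort(Comparator.reverseOrder());   // newest first
        if (sorted.size() <= retainCount) {
            return new ArrayList<>();             // nothing to purge
        }
        return sorted.subList(retainCount, sorted.size());
    }

    public static void main(String[] args) {
        List<Long> zxids = List.of(0x10L, 0x20L, 0x30L, 0x40L);
        System.out.println(snapshotsToPurge(zxids, 3)); // prints: [16]
    }
}
```

The bug described above is that only QuorumPeerMain wires this purge task up on a timer; ZooKeeperServerMain never does, so standalone servers accumulate snapshots.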
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 33 weeks, 1 day ago | 0|i2shfr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2362 | ZooKeeper multi / transaction allows partial read |
Bug | Open | Critical | Unresolved | Atri Sharma | Whitney Sorenson | Whitney Sorenson | 05/Feb/16 10:32 | 21/Nov/18 20:09 | 3.4.6 | server | 1 | 9 | In this thread http://mail-archives.apache.org/mod_mbox/zookeeper-user/201602.mbox/%3CCAPbqGzicBkLLyVDm7RFM20z0y3X1v1P-C9-1%3D%3D1DDqRDTzdOmQ%40mail.gmail.com%3E , I discussed an issue I've now seen in multiple environments: In a multi (using Curator), I write 2 new nodes. At some point, I issue 2 reads for these new nodes. In one read, I see one of the new nodes. In a subsequent read, I fail to see the other new node: 1. Starting state : { /foo = <does not exist>, /bar = <does not exist> } 2. In a multi, write: { /foo = A, /bar = B} 3. Read /foo as A 4. Read /bar as <does not exist> #3 and #4 are issued 100% sequentially. It is not known at what point during #2, #3 starts. Note: the reads are getChildren() calls. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 1 year, 17 weeks ago | 0|i2shbz: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2361 | Revisit 'VisibleForTesting' phrase used to indicate a member or method visible for testing |
Improvement | Open | Minor | Unresolved | Rakesh Radhakrishnan | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 04/Feb/16 13:02 | 04/Feb/16 14:23 | 0 | 2 | ZooKeeper uses a {{// VisibleForTesting}} comment to indicate a member or method which is visible for unit testing. The idea of this jira is to discuss better ways to convey the message more clearly and implement them. One idea is to use annotations, introducing {{@VisibleForTesting}}. For example, [ContainerManager.java#L134|https://github.com/apache/zookeeper/blob/trunk/src/java/main/org/apache/zookeeper/server/ContainerManager.java#L134], [PurgeTxnLog.java#L78|https://github.com/apache/zookeeper/blob/trunk/src/java/main/org/apache/zookeeper/server/PurgeTxnLog.java#L78], [ZooKeeper.java#L1011|https://github.com/apache/zookeeper/blob/trunk/src/java/main/org/apache/zookeeper/ZooKeeper.java#L1011] etc. {code} ZooKeeper.java // VisibleForTesting public Testable getTestable() { return new ZooKeeperTestable(this, cnxn); } {code} {code} PurgeTxnLog.java // VisibleForTesting static void retainNRecentSnapshots(FileTxnSnapLog txnLog, List<File> snaps) { {code} |
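A minimal sketch of what introducing such an annotation could look like. Runtime retention is chosen here only so its presence can be demonstrated reflectively; the real annotation might well use source or class retention, and the names are hypothetical:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class VisibleForTestingSketch {
    // Hypothetical annotation replacing the "// VisibleForTesting" comments.
    @Retention(RetentionPolicy.RUNTIME)
    @Target({ElementType.METHOD, ElementType.FIELD, ElementType.CONSTRUCTOR})
    public @interface VisibleForTesting {}

    // Package-private only so unit tests can reach it -- with the annotation
    // the intent is machine-checkable instead of buried in a comment.
    @VisibleForTesting
    static int internalState() {
        return 42;
    }

    public static void main(String[] args) throws Exception {
        boolean marked = VisibleForTestingSketch.class
                .getDeclaredMethod("internalState")
                .isAnnotationPresent(VisibleForTesting.class);
        System.out.println(marked); // prints: true
    }
}
```

An annotation also lets static-analysis tools flag production code that calls test-only members, which a bare comment cannot do.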
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 7 weeks ago | 0|i2sfrj: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2360 | Update commons collections version used by tests/releaseaudit |
Bug | Closed | Blocker | Fixed | Patrick D. Hunt | Patrick D. Hunt | Patrick D. Hunt | 04/Feb/16 12:33 | 21/Jul/16 16:18 | 04/Feb/16 19:33 | 3.4.7, 3.5.1 | 3.4.8, 3.5.2 | build | 0 | 4 | I don't believe this affects us from a security perspective directly, however it's something we should clean up in our next release. Afaict the only commons we use for shipping/production code is commons-cli. Our two release branches, 3.4 and 3.5, neither of them use commons-collections. I looked at the binary release artifact and it doesn't include the commons collections jar. We do have a test that uses CollectionsUtils, but no shipping code. I downloaded our 3.4 and 3.5 artifacts, this is all I see: phunt:~/Downloads/zd/5/zookeeper-3.5.1-alpha$ grep -R "org.apache.commons.collections" . ./src/java/test/org/apache/zookeeper/RemoveWatchesTest.java:import org.apache.commons.collections.CollectionUtils; phunt:~/Downloads/zd/5/zookeeper-3.5.1-alpha$ Also in our ivy file we have <dependency org="org.apache.rat" name="apache-rat-tasks" rev="0.10" conf="releaseaudit->default"/> <dependency org="commons-lang" name="commons-lang" rev="2.6" conf="releaseaudit->default"/> <dependency org="commons-collections" name="commons-collections" rev="3.2.1" conf="releaseaudit->default"/> So commons-collections is pulled in - but only for the release audit, which is something we do as a build verification activity but not part of the product itself. |
9223372036854775807 | No Perforce job exists for this issue. | 3 | 9223372036854775807 | 4 years, 6 weeks, 6 days ago |
Reviewed
|
0|i2sfo7: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2359 | ZooKeeper client has unnecessary logs for watcher removal errors |
Improvement | Resolved | Major | Fixed | Jordan Zimmerman | Jordan Zimmerman | Jordan Zimmerman | 04/Feb/16 08:30 | 27/May/19 04:17 | 01/Jun/17 12:38 | 3.5.1 | 3.5.4, 3.6.0 | java client | 8 | 11 | ClientCnxn.java logs errors during watcher removal: LOG.error("Failed to find watcher!", nwe); LOG.error("Exception when removing watcher", ke); An error code/exception is generated so the logs are noisy and unnecessary. If the client handles the error there's still a log message. This is different than other APIs. These logs should be removed. |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 42 weeks, 3 days ago | 0|i2sf8n: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2358 | NettyServerCnxn leaks watches upon close |
Bug | Open | Major | Unresolved | Ian Dimayuga | Ian Dimayuga | Ian Dimayuga | 27/Jan/16 21:35 | 05/Feb/20 07:16 | 3.4.7, 3.5.1 | 3.7.0, 3.5.8 | 0 | 6 | ZOOKEEPER-2527, ZOOKEEPER-2530, ZOOKEEPER-2509 | NettyServerCnxn.close() neglects to call zkServer.removeCnxn the way NIOServerCnxn.close() does. Also, WatchLeakTest does not test watch leaks in Netty. | 9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 3 years, 31 weeks, 2 days ago | 0|i2s2jr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2357 | Unhandled errors propagating through cluster |
Task | Open | Minor | Unresolved | Unassigned | Gareth Humphries | Gareth Humphries | 22/Jan/16 11:26 | 05/Nov/17 15:32 | 3.4.6 | leaderElection, quorum, server | 1 | 5 | Hi, I need some help understanding a recurring problem we're seeing with our zookeeper cluster. It's a five-node cluster that ordinarily runs fine. Occasionally we see an error from which the cluster recovers, but it causes a lot of grief and I'm sure is representative of an unhealthy situation. To my eye it looks like an invalid bit of data getting into the system and not being handled gracefully; I'm the first to say my eye is not expert though, so I humbly submit an annotated log excerpt in the hope someone who knows more than I do can provide some illumination. The cluster seems to be ticking along fine, until we get errors on 2 of the 5 nodes like so: 2016-01-19 13:12:49,698 - WARN [QuorumPeer[myid=1]/0.0.0.0:2181:Follower@89] - Exception when following the leader java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63) at org.apache.zookeeper.server.quorum.QuorumPacket.deserialize(QuorumPacket.java:83) at org.apache.jute.BinaryInputArchive.readRecord(BinaryInputArchive.java:103) at org.apache.zookeeper.server.quorum.Learner.readPacket(Learner.java:153) at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:85) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:786) 2016-01-19 13:12:49,698 - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:Follower@166] - shutdown called java.lang.Exception: shutdown Follower at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:166) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:790) This is immediately followed by 380 occurrences of: 2016-01-19 13:12:49,699 - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for client /X.Y.Z.56:59028 which had sessionid 
0x151b01ee8330234 and a: 2016-01-19 13:12:49,766 - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:FollowerZooKeeperServer@139] - Shutting down 2016-01-19 13:12:49,766 - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:ZooKeeperServer@441] - shutting down 2016-01-19 13:12:49,766 - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:FollowerRequestProcessor@105] - Shutting down 2016-01-19 13:12:49,766 - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:CommitProcessor@181] - Shutting down 2016-01-19 13:12:49,766 - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:FinalRequestProcessor@415] - shutdown of request processor complete 2016-01-19 13:12:49,767 - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:SyncRequestProcessor@209] - Shutting down 2016-01-19 13:12:49,767 - INFO [CommitProcessor:1:CommitProcessor@150] - CommitProcessor exited loop! 2016-01-19 13:12:49,767 - INFO [FollowerRequestProcessor:1:FollowerRequestProcessor@95] - FollowerRequestProcessor exited loop! 2016-01-19 13:13:09,418 - WARN [SyncThread:1:FileTxnLog@334] - fsync-ing the write ahead log in SyncThread:1 took 30334ms which will adversely effect operation latency. 
See the ZooKeeper troubleshooting guide 2016-01-19 13:13:09,427 - WARN [SyncThread:1:SendAckRequestProcessor@64] - Closing connection to leader, exception during packet send java.net.SocketException: Socket closed at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:121) at java.net.SocketOutputStream.write(SocketOutputStream.java:159) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at org.apache.zookeeper.server.quorum.Learner.writePacket(Learner.java:139) at org.apache.zookeeper.server.quorum.SendAckRequestProcessor.flush(SendAckRequestProcessor.java:62) at org.apache.zookeeper.server.SyncRequestProcessor.flush(SyncRequestProcessor.java:204) at org.apache.zookeeper.server.SyncRequestProcessor.run(SyncRequestProcessor.java:131) 2016-01-19 13:13:09,428 - INFO [SyncThread:1:SyncRequestProcessor@187] - SyncRequestProcessor exited! As a small aside, the fsync log errors for the first two servers to be hit are: 2016-01-19 13:13:09,418 - WARN [SyncThread:1:FileTxnLog@334] - fsync-ing the write ahead log in SyncThread:1 took 30334ms which will adversely effect operation latency. 2016-01-19 13:13:09,539 - WARN [SyncThread:2:FileTxnLog@334] - fsync-ing the write ahead log in SyncThread:2 took 30456ms which will adversely effect operation latency. If you rewind from each entry's timestamp by the milliseconds given, you arrive within one millisecond of the same time on each server. But I digress. 
For the next 12 minutes or so, the logs are full of the below sort of exceptions, in seemingly no consistent order or frequency: 2016-01-19 13:13:09,440 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@362] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running 2016-01-19 13:13:09,441 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for client /X.Y.Z.181:51381 (no session established for client) 2016-01-19 13:13:09,443 - WARN [QuorumPeer[myid=1]/0.0.0.0:2181:SendAckRequestProcessor@64] - Closing connection to leader, exception during packet send java.net.SocketException: Socket closed at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:121) at java.net.SocketOutputStream.write(SocketOutputStream.java:159) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at org.apache.zookeeper.server.quorum.Learner.writePacket(Learner.java:139) at org.apache.zookeeper.server.quorum.SendAckRequestProcessor.flush(SendAckRequestProcessor.java:62) at org.apache.zookeeper.server.SyncRequestProcessor.flush(SyncRequestProcessor.java:204) at org.apache.zookeeper.server.SyncRequestProcessor.shutdown(SyncRequestProcessor.java:216) at org.apache.zookeeper.server.quorum.FollowerZooKeeperServer.shutdown(FollowerZooKeeperServer.java:147) at org.apache.zookeeper.server.quorum.Learner.shutdown(Learner.java:546) at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:167) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:790) 2016-01-19 13:13:09,443 - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:QuorumPeer@714] - LOOKING 2016-01-19 13:13:11,782 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@357] - caught end of stream exception EndOfStreamException: Unable to read additional data from client sessionid 0x1525a047dc20005, likely client has closed 
socket at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228) at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208) at java.lang.Thread.run(Thread.java:744) 2016-01-19 13:13:11,783 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for client /X.Y.Z.1:59576 which had sessionid 0x1525a047dc20005 2016-01-19 13:13:11,784 - ERROR [CommitProcessor:1:NIOServerCnxn@178] - Unexpected Exception: java.nio.channels.CancelledKeyException at sun.nio.ch.SelectionKeyImpl.ensureValid(SelectionKeyImpl.java:73) at sun.nio.ch.SelectionKeyImpl.interestOps(SelectionKeyImpl.java:77) at org.apache.zookeeper.server.NIOServerCnxn.sendBuffer(NIOServerCnxn.java:151) at org.apache.zookeeper.server.NIOServerCnxn.sendResponse(NIOServerCnxn.java:1081) at org.apache.zookeeper.server.FinalRequestProcessor.processRequest(FinalRequestProcessor.java:404) at org.apache.zookeeper.server.quorum.CommitProcessor.run(CommitProcessor.java:74) 2016-01-19 13:25:43,898 - INFO [WorkerReceiver[myid=1]:FastLeaderElection@597] - Notification: 1 (message format version), 2 (n.leader), 0x2a001d352d (n.zxid), 0xb (n.round), LOOKING (n.state), 2 (n.sid), 0x2a (n.peerEpoch) FOLLOWING (my state) 2016-01-19 13:25:43,901 - WARN [QuorumPeer[myid=1]/0.0.0.0:2181:Follower@89] - Exception when following the leader java.net.SocketTimeoutException: Read timed out at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.read(SocketInputStream.java:152) at java.net.SocketInputStream.read(SocketInputStream.java:122) at java.io.BufferedInputStream.fill(BufferedInputStream.java:235) at java.io.BufferedInputStream.read(BufferedInputStream.java:254) at java.io.DataInputStream.readInt(DataInputStream.java:387) at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63) at org.apache.zookeeper.server.quorum.QuorumPacket.deserialize(QuorumPacket.java:83) at 
org.apache.jute.BinaryInputArchive.readRecord(BinaryInputArchive.java:103) at org.apache.zookeeper.server.quorum.Learner.readPacket(Learner.java:153) at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:85) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:786) 2016-01-19 13:25:43,901 - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:Follower@166] - shutdown called java.lang.Exception: shutdown Follower at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:166) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:790) Until eventually we get to: 2016-01-19 13:26:05,099 - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:QuorumPeer@784] - FOLLOWING 2016-01-19 13:26:05,099 - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:ZooKeeperServer@162] - Created server with tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir /var/lib/zookeeper_1/data/version-2 snapdir /var/lib/zookeeper_1/data/version-2 2016-01-19 13:26:05,099 - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:Follower@63] - FOLLOWING - LEADER ELECTION TOOK - 21179 2016-01-19 13:26:05,100 - WARN [QuorumPeer[myid=1]/0.0.0.0:2181:Learner@233] - Unexpected exception, tries=0, connecting to zoo005/X.Y.Z.71:2888 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at org.apache.zookeeper.server.quorum.Learner.connectToLeader(Learner.java:225) at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:71) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:786) And things start to come right. 
Right about now, the three members which had so far escaped begin to exhibit the same behaviour. Again, if we look at the fsync messages: 2016-01-19 13:26:06,192 - WARN [SyncThread:3:FileTxnLog@334] - fsync-ing the write ahead log in SyncThread:3 took 51394ms which will adversely effect operation latency. 2016-01-19 13:26:05,960 - WARN [SyncThread:4:FileTxnLog@334] - fsync-ing the write ahead log in SyncThread:4 took 51162ms which will adversely effect operation latency. 2016-01-19 13:26:04,524 - WARN [SyncThread:5:FileTxnLog@334] - fsync-ing the write ahead log in SyncThread:5 took 49726ms which will adversely effect operation latency. If we rewind the number of milliseconds from the log entry timestamps, we arrive at exactly 13:25:14,798 for all three events. So, it looks for all the world like something entered the system at 13:12:39,084, caused havoc on two nodes for 12.5 minutes, then at 13:25:14,798 it got off those and moved to the other three, where it again caused havoc, before things eventually recovered and the world kept on ticking, only a medium-sized log explosion worse for it. There is nothing in any of the logs within a second of either of those times. I'm hoping someone familiar with the code can look at those stack traces and understand what might cause such an incident. I'm happy to help any way I can. I have more complete logs, and we see this every couple of weeks or so, so I can set up some additional logging if it would be of value. Let me know. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 2 years, 19 weeks, 4 days ago | 0|i2ruov: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2356 | SaslServer doesn't keep user information; can't find who created the znode. |
Improvement | Open | Major | Unresolved | Unassigned | caixiaofeng | caixiaofeng | 20/Jan/16 21:12 | 20/Jan/16 21:12 | 3.5.1 | server | 0 | 5 | kerberos server | ZooKeeper only keeps data like the stat output below and does not record any information about the user who created a znode, so in a SASL auth environment there is no way to tell who created it. It would be helpful if ZooKeeper stored create/modify user information, as other file systems do. [zk: x.x.x.x:24002(CONNECTED) 2] stat /t2 cZxid = 0x10001e409 ctime = Tue Dec 22 17:51:50 CST 2015 mZxid = 0x10001e409 mtime = Tue Dec 22 17:51:50 CST 2015 pZxid = 0x10001e409 cversion = 0 dataVersion = 0 aclVersion = 0 ephemeralOwner = 0x0 dataLength = 7 numChildren = 0 |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 9 weeks ago | 0|i2rrz3: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2355 | Ephemeral node is never deleted if follower fails while reading the proposal packet |
Bug | Resolved | Critical | Fixed | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 18/Jan/16 09:54 | 13/Nov/18 08:48 | 03/Aug/17 11:58 | 3.4.8, 3.4.9, 3.4.10, 3.5.1, 3.5.2, 3.5.3 | 3.4.11, 3.5.4, 3.6.0 | quorum, server | 12 | 21 | ZOOKEEPER-2834 | KNOX-1599, ZOOKEEPER-2348, CURATOR-409, ZOOKEEPER-3040 | A ZooKeeper ephemeral node is never deleted if a follower fails while reading the proposal packet. The scenario is as follows: # Configure a three-node ZooKeeper cluster; let's say the nodes are A, B and C, start all, and assume A is the leader and B and C are followers # Connect to any of the servers and create ephemeral node /e1 # Close the session; ephemeral node /e1 will go for deletion # While receiving the delete proposal, make Follower B fail with {{SocketTimeoutException}}. We need to do this to reproduce the scenario; otherwise, in a production environment it happens because of a network fault. # Remove the fault and check that the faulted Follower is now connected with the quorum # Connect to any of the servers and create the same ephemeral node /e1; creation succeeds. # Close the session; ephemeral node /e1 will go for deletion # {color:red}/e1 is not deleted from the faulted Follower B. It should have been deleted, as it was created again with another session{color} # {color:green}/e1 is deleted from Leader A and the other Follower C{color} |
9223372036854775807 | No Perforce job exists for this issue. | 5 | 9223372036854775807 | 2 years, 33 weeks ago | 0|i2rn73: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2354 | ZOOKEEPER-1653 not merged in master and 3.5 branch |
Bug | Patch Available | Major | Unresolved | Sujith Simon | Mohammad Arshad | Mohammad Arshad | 12/Jan/16 10:20 | 05/Feb/20 07:12 | 3.7.0, 3.5.8 | 3 | 6 | 0 | 7200 | ZOOKEEPER-1653 is merged only to 3.4 branch. It should be merged to 3.5 and master branch as well. |
100% | 100% | 7200 | 0 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 24 weeks, 2 days ago | 0|i2r6sf: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2353 | QuorumCnxManager protocol needs to be upgradable within a specific version |
Improvement | Open | Major | Unresolved | Unassigned | Powell Molleti | Powell Molleti | 11/Jan/16 14:54 | 18/Jan/16 19:36 | 3.4.7, 3.5.1 | 0 | 7 | Currently 3.5.X sends its hdr as follows: {code:title=QuorumCnxManager.java|borderStyle=solid} dout.writeLong(PROTOCOL_VERSION); dout.writeLong(self.getId()); String addr = self.getElectionAddress().getHostString() + ":" + self.getElectionAddress().getPort(); byte[] addr_bytes = addr.getBytes(); dout.writeInt(addr_bytes.length); dout.write(addr_bytes); dout.flush(); {code} Since it writes length of host and port byte string there is no simple way to append new fields to this hdr anymore. I.e the rx side has to consider all bytes after sid for host and port parsing, which is what it does here: [QuorumCnxManager.InitialMessage.parse(): http://bit.ly/1Q0znpW] {code:title=QuorumCnxManager.java|borderStyle=solid} sid = din.readLong(); int remaining = din.readInt(); if (remaining <= 0 || remaining > maxBuffer) { throw new InitialMessageException( "Unreasonable buffer length: %s", remaining); } byte[] b = new byte[remaining]; int num_read = din.read(b); if (num_read != remaining) { throw new InitialMessageException( "Read only %s bytes out of %s sent by server %s", num_read, remaining, sid); } // FIXME: IPv6 is not supported. Using something like Guava's HostAndPort // parser would be good. String addr = new String(b); String[] host_port = addr.split(":"); {code} This has been captured in the discussion here: ZOOKEEPER-2186. Though it is possible to circumvent this problem by various means the request here is to design messages with hdr such that there is no need to bump version number or hack certain fields (i.e figure out if its length of host/port or length of different message etc, in the above case). This is the idea here as captured in ZOOKEEPER-2186. 
{code:java} dout.writeLong(PROTOCOL_VERSION); String addr = self.getElectionAddress().getHostString() + ":" + self.getElectionAddress().getPort(); byte[] addr_bytes = addr.getBytes(); // After version write the total length of msg sent by sender. dout.writeInt(Long.BYTES + addr_bytes.length); // Write sid afterwards dout.writeLong(self.getId()); // Write length of host/port string dout.writeInt(addr_bytes.length); // Write host/port string dout.write(addr_bytes); {code} Since total length of the message and length of each variable field is also present it is quite easy to provide backward compatibility, w.r.t to parsing of the message. Older code will read the length of message it knows and ignore the rest. Newer revision(s), that wants to keep things compatible, will only append to hdr and not change the meaning of current fields. I am guessing this was the original intent w.r.t the introduction of protocol version here: ZOOKEEPER-1633 Since 3.4.x code does not parse this and 3.5.x is still in alpha mode perhaps it is possible to consider this change now?. Also I would like to propose to carefully consider the option of using protobufs for the next protocol version bump. This will prevent issues like this in the future. |
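The backward-compatibility argument in the proposal above can be illustrated with a self-contained sketch using plain data streams. This is not ZooKeeper's actual QuorumCnxManager code: the field layout follows the proposal (version, total payload length, sid, addr length, addr), and the version constant and names are illustrative:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Sketch of a length-prefixed header: old readers consume the fields they
// know and skip the remainder, so new trailing fields don't break them.
public class HeaderSketch {
    static final long PROTOCOL_VERSION = -65536L; // illustrative value

    static byte[] write(long sid, String addr, byte[] extra) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream dout = new DataOutputStream(bos);
        byte[] addrBytes = addr.getBytes();
        dout.writeLong(PROTOCOL_VERSION);
        // Total payload length after this field: sid + addr-length + addr + extras.
        dout.writeInt(Long.BYTES + Integer.BYTES + addrBytes.length + extra.length);
        dout.writeLong(sid);
        dout.writeInt(addrBytes.length);
        dout.write(addrBytes);
        dout.write(extra); // fields only a newer sender knows about
        return bos.toByteArray();
    }

    // An "old" parser: reads the fields it understands, skips the rest.
    static String parseSidAndAddr(byte[] msg) throws IOException {
        DataInputStream din = new DataInputStream(new ByteArrayInputStream(msg));
        if (din.readLong() != PROTOCOL_VERSION) {
            throw new IOException("unexpected protocol version");
        }
        int total = din.readInt();
        long sid = din.readLong();
        int addrLen = din.readInt();
        byte[] addr = new byte[addrLen];
        din.readFully(addr);
        din.skipBytes(total - Long.BYTES - Integer.BYTES - addrLen); // ignore unknown tail
        return sid + "@" + new String(addr);
    }

    public static void main(String[] args) throws IOException {
        byte[] v1 = write(3L, "127.0.0.1:3888", new byte[0]);
        byte[] v2 = write(3L, "127.0.0.1:3888", new byte[] {1, 2, 3, 4});
        System.out.println(parseSidAndAddr(v1)); // prints: 3@127.0.0.1:3888
        System.out.println(parseSidAndAddr(v2)); // same result despite extra bytes
    }
}
```

The key property is in `parseSidAndAddr`: because the total payload length is explicit, an older parser can skip fields appended by a newer peer instead of misinterpreting them, which is exactly what the current sid-then-remainder layout cannot do.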
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 9 weeks, 2 days ago | 0|i2r593: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2352 | rpm build broke |
Bug | Resolved | Major | Won't Fix | Unassigned | Wenjie Ding | Wenjie Ding | 05/Jan/16 15:14 | 03/Mar/16 11:20 | 03/Mar/16 11:20 | 3.5.1 | build | 0 | 3 | 36000 | 36000 | 0% | ubuntu 15.10 | rpm ‘cd’s into the BUILD directory, then deletes the directory it just ‘cd’-ed into. From that point on, it called scripts which loaded /bin/bash and crashed, since ‘getcwd’ cannot access the parent directory, which had been deleted. Build output messages: ... [rpm] + cd /tmp/zkpython_build_root/BUILD [rpm] + '[' /tmp/zkpython_build_root/BUILD '!=' / ']' [rpm] + rm -rf /tmp/zkpython_build_root/BUILD [rpm] ++ dirname /tmp/zkpython_build_root/BUILD [rpm] + mkdir -p /tmp/zkpython_build_root [rpm] + mkdir /tmp/zkpython_build_root/BUILD [rpm] + /usr/lib/rpm/check-buildroot [rpm] shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory [rpm] + /usr/lib/rpm/redhat/brp-compress [rpm] shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory [rpm] chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory [rpm] + /usr/lib/rpm/redhat/brp-strip /usr/bin/strip [rpm] shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory [rpm] + /usr/lib/rpm/redhat/brp-strip-static-archive /usr/bin/strip [rpm] shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory [rpm] + /usr/lib/rpm/redhat/brp-strip-comment-note /usr/bin/strip /usr/bin/objdump [rpm] shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory [rpm] + /usr/lib/rpm/brp-python-bytecompile [rpm] shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory |
0% | 0% | 36000 | 36000 | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 3 weeks ago | 0|i2qthz: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2351 | %JAVA_HOME% in bin\zkEnv.cmd does not work on Windows 8 and Windows 10 |
Bug | Resolved | Major | Duplicate | Unassigned | LEON YU | LEON YU | 03/Jan/16 22:34 | 04/Jan/16 18:02 | 04/Jan/16 18:02 | 3.4.7 | scripts | 0 | 2 | ZOOKEEPER-2281 | %JAVA_HOME% does not work in zkEnv.cmd, so zkServer.cmd and zkClient.cmd cannot run and ZooKeeper cannot start. A temporary solution is to replace %JAVA_HOME% with the quoted form "%JAVA_HOME%" in zkEnv.cmd. |
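The workaround described amounts to quoting the expansion in bin\zkEnv.cmd so that a JAVA_HOME containing spaces (e.g. under C:\Program Files) survives. The exact line varies by release, so this is illustrative only:

```
rem Before (breaks when %JAVA_HOME% contains spaces):
set JAVA=%JAVA_HOME%\bin\java

rem After (quoted, per the workaround in this report):
set JAVA="%JAVA_HOME%"\bin\java
```

Unquoted, cmd.exe splits the expanded path at the space, so the later java invocation fails before the server can start.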
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 11 weeks, 3 days ago |
Incompatible change
|
0|i2qqi7: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2350 | Script that provides a way to build the ensemble with ease. |
New Feature | Open | Minor | Unresolved | Unassigned | Minoru Osuka | Minoru Osuka | 27/Dec/15 20:33 | 29/Feb/16 00:31 | 3.5.0 | scripts | 1 | 3 | Starting with 3.5.0, ZooKeeper supports very convenient dynamic reconfiguration. However, the procedure is slightly complicated. So I propose adding a script to build an ensemble with ease. Usage: {noformat} $ ./bin/zkEnsemble.sh help usage: ./bin/zkEnsemble.sh {start|stop|status} <parameters> Commands: start Start a node of ensemble. Parameters: --seed Specify the IP address and port of an existing ensemble node that is required for the 2nd and subsequent nodes. This is not required for the 1st node. (Example: 127.0.0.1:2181) --ip Normally, you do not need to specify because it is automatically detected. If it seems the wrong IP address is found automatically, you can override the IP address with this option. --clientport The port used for client connections (2181 by default). --peerport The port used to talk to each other (2888 by default). If omitted, it will use the minimum port number that is available between 2888 and 3142. --electionport The port used for leader election (3888 by default). If omitted, it will use the minimum port number that is available between 3888 and 4142. --role The role of node, it can be participant or observer (participant by default). --clientip The IP address for client connections (0.0.0.0 by default). If omitted, it will use the minimum port number that is available between 2181 and 2435. --confdir Specify a base conf directory (/Users/mosuka/git/zookeeper/conf by default). --datadir Specify a base data directory (/tmp/zookeeper by default). --foreground Start node in foreground. stop Stop a node of ensemble. Parameters: --ip Normally, you do not need to specify because it is automatically detected. If it seems the wrong IP address is found automatically, you can override the IP address with this option. --clientport The port used for client connections (2181 by default). 
status Show the ensemble nodes. Parameters: --seed Specify the IP address and port of an existing ensemble node (Example: 127.0.0.1:2181). help Display this message. {noformat} Example: 1. Start the 1st node of the ensemble on host1 (192.168.33.11). {noformat} $ ./bin/zkEnsemble.sh start ZooKeeper JMX enabled by default Using config: /Users/minoru/zookeeper/zookeeper-3.5.0/conf/server.1.cfg Starting zookeeper ... STARTED {noformat} 2. Start the 2nd node of the ensemble on host2 (192.168.33.12). {noformat} $ ./bin/zkEnsemble.sh start --seed=192.168.33.11:2181 ZooKeeper JMX enabled by default Using config: /Users/minoru/zookeeper/zookeeper-3.5.0/conf/server.2.cfg Starting zookeeper ... STARTED {noformat} 3. Start the 3rd node of the ensemble on host3 (192.168.33.13). {noformat} $ ./bin/zkEnsemble.sh start --seed=192.168.33.11:2181 ZooKeeper JMX enabled by default Using config: /Users/minoru/zookeeper/zookeeper-3.5.0/conf/server.3.cfg Starting zookeeper ... STARTED {noformat} 4. Show the ensemble nodes on host1 (192.168.33.11). {noformat} $ ./bin/zkEnsemble.sh status --seed=192.168.33.11:2181 server.1=192.168.33.11:2888:3888:participant;0.0.0.0:2181 server.2=192.168.33.12:2888:3888:participant;0.0.0.0:2181 server.3=192.168.33.13:2888:3888:participant;0.0.0.0:2181 {noformat} 5. Stop the 2nd node of the ensemble on host2 (192.168.33.12). {noformat} $ ./bin/zkEnsemble.sh stop Using config: /Users/minoru/zookeeper/zookeeper-3.5.0/conf/server.2.cfg Stopping zookeeper ... STOPPED {noformat} 6. Show the ensemble nodes on host1 (192.168.33.11). {noformat} $ ./bin/zkEnsemble.sh status --seed=192.168.33.11:2181 server.1=192.168.33.11:2888:3888:participant;0.0.0.0:2181 server.3=192.168.33.13:2888:3888:participant;0.0.0.0:2181 {noformat} 7. Start the 2nd node of the ensemble on host2 (192.168.33.12). {noformat} $ ./bin/zkEnsemble.sh start --seed=192.168.33.11:2181 ZooKeeper JMX enabled by default Using config: /Users/minoru/zookeeper/zookeeper-3.5.0/conf/server.2.cfg Starting zookeeper ... STARTED {noformat} 8. 
Show ensemble nodes on host1(192.168.33.11). {noformat} $ ./bin/zkEnsemble.sh status --seed=192.168.33.11:2181 server.1=192.168.33.11:2888:3888:participant;0.0.0.0:2181 server.2=192.168.33.12:2888:3888:participant;0.0.0.0:2181 server.3=192.168.33.13:2888:3888:participant;0.0.0.0:2181 {noformat} |
9223372036854775807 | No Perforce job exists for this issue. | 8 | 9223372036854775807 | 4 years, 3 weeks, 3 days ago | 0|i2qbxb: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2349 | Update documentation for snapCount |
Bug | Resolved | Minor | Fixed | maoling | Raghavendra Prabhu | Raghavendra Prabhu | 21/Dec/15 09:17 | 11/Sep/17 17:37 | 11/Sep/17 16:43 | 3.4.11, 3.5.4, 3.6.0 | documentation | 0 | 7 | The documentation states that {code} ZooKeeper logs transactions to a transaction log. After snapCount transactions are written to a log file a snapshot is started and a new transaction log file is created. The default snapCount is 100,000. {code} However, in the implementation, snapshotting is done when logCount is somewhere in (snapCount/2, snapCount+1], based on a limit randomized at runtime: {code} if (logCount > (snapCount / 2 + randRoll)) { {code} as in https://github.com/apache/zookeeper/blob/trunk/src/java/main/org/apache/zookeeper/server/SyncRequestProcessor.java#L124 |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 2 years, 27 weeks, 3 days ago | 0|i2q5sn: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
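The randomized snapshot trigger described in ZOOKEEPER-2349 above can be sketched as a small, self-contained program. The class and method names are illustrative stand-ins, not the actual SyncRequestProcessor code, and the sketch assumes randRoll is drawn from [0, snapCount/2) as the quoted condition suggests:

```java
import java.util.Random;

public class SnapTriggerSketch {

    // The quoted condition: a snapshot fires once logCount exceeds
    // snapCount/2 + randRoll, not once it reaches snapCount exactly.
    static boolean shouldSnapshot(int logCount, int snapCount, int randRoll) {
        return logCount > (snapCount / 2 + randRoll);
    }

    public static void main(String[] args) {
        int snapCount = 100_000;
        // Assumed distribution: randRoll redrawn after each snapshot from [0, snapCount/2),
        // so the first triggering logCount lands somewhere in (snapCount/2, snapCount].
        int randRoll = new Random().nextInt(snapCount / 2);
        for (int logCount = 1; ; logCount++) {
            if (shouldSnapshot(logCount, snapCount, randRoll)) {
                System.out.println("snapshot at logCount=" + logCount);
                break;
            }
        }
    }
}
```

The randomization spreads snapshot work out in time across an ensemble, which is why the documented "exactly snapCount" wording was misleading.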
| ZooKeeper | ZOOKEEPER-2348 | Data between leader and followers are not synchronized. |
Bug | Open | Major | Unresolved | Unassigned | Echo Chen | Echo Chen | 18/Dec/15 05:17 | 30/Apr/19 00:56 | 3.5.1 | 0 | 9 | ZOOKEEPER-2355 | When a client session expired, the leader tried to remove it from the session map and remove its EPHEMERAL znode, for example /test_znode. This operation succeeded on the leader, but at the very same time a network fault happened, the change was not synced to the followers, and a new leader election was launched. After the election finished, the new leader was not the old leader. We found that the znode /test_znode still existed on the followers but not on the old leader. *Scenario:* 1) Create a znode, e.g. {{/rmstore/ZKRMStateRoot/RMAppRoot/application_1449644945944_0001/appattempt_1449644945944_0001_000001}} 2) Delete the znode. 3) A network fault occurs between the follower and leader machines. 4) Leader election happens again and a follower becomes the leader. Now the data is not synced with the new leader. After this, the client is not able to create the same znode. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 46 weeks, 2 days ago | 0|i2q32n: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2347 | Deadlock shutting down zookeeper |
Bug | Resolved | Blocker | Fixed | Rakesh Radhakrishnan | Ted Yu | Ted Yu | 16/Dec/15 16:01 | 14/Jan/16 00:22 | 14/Jan/16 00:22 | 3.4.7 | 3.4.8 | 0 | 10 | HBase recently upgraded to zookeeper 3.4.7 In one of the tests, TestSplitLogManager, there is reproducible hang at the end of the test. Below is snippet from stack trace related to zookeeper: {code} "main-EventThread" daemon prio=5 tid=0x00007fd27488a800 nid=0x6f1f waiting on condition [0x000000011834b000] java.lang.Thread.State: WAITING (parking) at sun.misc.Unsafe.park(Native Method) - parking to wait for <0x00000007c5b8d3a0> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501) "main-SendThread(localhost:59510)" daemon prio=5 tid=0x00007fd274eb4000 nid=0x9513 waiting on condition [0x0000000118042000] java.lang.Thread.State: TIMED_WAITING (sleeping) at java.lang.Thread.sleep(Native Method) at org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:101) at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:997) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1060) "SyncThread:0" prio=5 tid=0x00007fd274d02000 nid=0x730f waiting for monitor entry [0x00000001170ac000] java.lang.Thread.State: BLOCKED (on object monitor) at org.apache.zookeeper.server.ZooKeeperServer.decInProcess(ZooKeeperServer.java:512) - waiting to lock <0x00000007c5b62128> (a org.apache.zookeeper.server.ZooKeeperServer) at org.apache.zookeeper.server.FinalRequestProcessor.processRequest(FinalRequestProcessor.java:144) at org.apache.zookeeper.server.SyncRequestProcessor.flush(SyncRequestProcessor.java:200) at 
org.apache.zookeeper.server.SyncRequestProcessor.run(SyncRequestProcessor.java:131) "main-EventThread" daemon prio=5 tid=0x00007fd2753a3800 nid=0x711b waiting on condition [0x0000000117a30000] java.lang.Thread.State: WAITING (parking) at sun.misc.Unsafe.park(Native Method) - parking to wait for <0x00000007c9b106b8> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501) "main" prio=5 tid=0x00007fd276000000 nid=0x1903 in Object.wait() [0x0000000108aa1000] java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) - waiting on <0x00000007c5b66400> (a org.apache.zookeeper.server.SyncRequestProcessor) at java.lang.Thread.join(Thread.java:1281) - locked <0x00000007c5b66400> (a org.apache.zookeeper.server.SyncRequestProcessor) at java.lang.Thread.join(Thread.java:1355) at org.apache.zookeeper.server.SyncRequestProcessor.shutdown(SyncRequestProcessor.java:213) at org.apache.zookeeper.server.PrepRequestProcessor.shutdown(PrepRequestProcessor.java:770) at org.apache.zookeeper.server.ZooKeeperServer.shutdown(ZooKeeperServer.java:478) - locked <0x00000007c5b62128> (a org.apache.zookeeper.server.ZooKeeperServer) at org.apache.zookeeper.server.NIOServerCnxnFactory.shutdown(NIOServerCnxnFactory.java:266) at org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster.shutdown(MiniZooKeeperCluster.java:301) {code} Note the address (0x00000007c5b66400) in the last hunk which seems to indicate some form of deadlock. According to Camille Fournier: We made shutdown synchronized. But decrementing the requests is also synchronized and called from a different thread. So yeah, deadlock. 
This came in with ZOOKEEPER-1907 |
9223372036854775807 | No Perforce job exists for this issue. | 5 | 9223372036854775807 | 4 years, 10 weeks ago | 0|i2q00v: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
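The lock cycle Camille Fournier describes in ZOOKEEPER-2347 above can be reduced to a minimal, self-contained sketch: a synchronized shutdown joins a worker thread that is itself blocked trying to enter another synchronized method of the same lock. The names are illustrative stand-ins for ZooKeeperServer/SyncRequestProcessor, and the join timeout exists only so the demo terminates:

```java
import java.util.concurrent.CountDownLatch;

public class ShutdownDeadlockSketch {

    // Both methods lock the same monitor, mirroring how decInProcess() and
    // shutdown() both synchronize on the ZooKeeperServer instance in the report.
    static synchronized void decInProcess() { }

    static synchronized boolean shutdownWhileHoldingLock(Thread worker, CountDownLatch go)
            throws InterruptedException {
        go.countDown();          // release the worker; it now blocks on decInProcess()
        worker.join(500);        // joining while holding the lock the worker needs
        return worker.isAlive(); // true => deadlock (bounded here only by the timeout)
    }

    static boolean demoDeadlock() throws InterruptedException {
        final CountDownLatch go = new CountDownLatch(1);
        Thread worker = new Thread(() -> {
            try {
                go.await();
                decInProcess();  // blocks: the "shutdown" caller holds the monitor
            } catch (InterruptedException ignored) {
            }
        });
        worker.start();
        boolean deadlocked = shutdownWhileHoldingLock(worker, go);
        worker.join();           // the monitor is released now, so the worker can finish
        return deadlocked;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("deadlocked=" + demoDeadlock());
    }
}
```

In the real bug the join has no timeout, so the two threads wait on each other forever; the usual fix is to do the join outside the synchronized region.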
| ZooKeeper | ZOOKEEPER-2346 | SASL Auth failure manifested to client as connection refusal |
Bug | Patch Available | Major | Unresolved | Meyer Kizner | Steve Loughran | Steve Loughran | 15/Dec/15 12:59 | 28/Nov/16 14:44 | 3.4.6 | server | 0 | 5 | ZOOKEEPER-2344 | If a client can't authenticate via SASL, then (a) the stack trace is lost in the server logs, and (b) it is exposed to the client as a connection refusal. This results in Curator retrying many times before giving up, with the cause misinterpreted as a server-down problem rather than a client-not-trusted problem | 9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 3 years, 16 weeks, 3 days ago | 0|i2pxfb: |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2345 | ZOOKEEPER-2344 ServerCnxnFactory.configureSaslLogin() loses stack trace on auth failures |
Sub-task | Open | Major | Unresolved | Unassigned | Steve Loughran | Steve Loughran | 15/Dec/15 12:50 | 22/Mar/16 06:01 | 3.4.6 | server | 0 | 6 | ZOOKEEPER-2365 | When there's a problem authenticating in {{ServerCnxnFactory.configureSaslLogin()}}, the exception message is retained, but the full stack trace is lost. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 6 weeks, 6 days ago | 0|i2pxdr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
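The general pattern behind ZOOKEEPER-2345 above is wrapping an exception without keeping it as the cause. A minimal sketch of the difference; ConfigException and the wrap helpers are hypothetical names for illustration, not ZooKeeper APIs:

```java
public class CauseChainSketch {

    static class ConfigException extends Exception {
        ConfigException(String msg) { super(msg); }
        ConfigException(String msg, Throwable cause) { super(msg, cause); }
    }

    // Anti-pattern: only the message of the original exception survives;
    // its stack trace is gone, so the real failure point is unrecoverable.
    static ConfigException wrapLossy(Exception e) {
        return new ConfigException("SASL configuration failed: " + e);
    }

    // Fix: keep the original exception as the cause, so printStackTrace()
    // and log frameworks show the full "Caused by:" chain.
    static ConfigException wrapWithCause(Exception e) {
        return new ConfigException("SASL configuration failed", e);
    }

    public static void main(String[] args) {
        Exception root = new SecurityException("no valid credentials provided");
        System.out.println("lossy getCause():      " + wrapLossy(root).getCause());
        System.out.println("with-cause getCause(): " + wrapWithCause(root).getCause());
    }
}
```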
| ZooKeeper | ZOOKEEPER-2344 | Provide more diagnostics/stack traces on SASL Auth failure |
Improvement | Open | Major | Unresolved | Unassigned | Steve Loughran | Steve Loughran | 15/Dec/15 12:47 | 22/Jun/18 00:49 | 3.4.7, 3.5.1, 3.4.11 | java client, server | 0 | 8 | ZOOKEEPER-2345 | ZOOKEEPER-2035, ZOOKEEPER-2346, HADOOP-12649 | When Kerberos decides it doesn't want to work, the JRE libraries provide some terse and unhelpful error messages. The only way to debug the problem is (a) to have complete stack traces and (b) as much related information as possible. ZooKeeper could do more here. Currently too much of the code loses stack traces, and sometimes auth errors aren't reported back to the client at all (the connection is simply closed), among other issues. Everyone who has tried to diagnose Kerberos problems will appreciate improvements here |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 31 weeks, 2 days ago | 0|i2pxcv: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2343 | Zookeeper 3.5.1 failed to deploy into the kubernetes |
Bug | Open | Major | Unresolved | Unassigned | cheyang | cheyang | 15/Dec/15 06:06 | 30/Nov/17 04:27 | 3.5.1 | 3.5.1 | quorum | 0 | 5 | CentOS Linux release 7.1.1503 (Core) openjdk version "1.8.0_65" OpenJDK Runtime Environment (build 1.8.0_65-b17) OpenJDK 64-Bit Server VM (build 25.65-b01, mixed mode) zookeeper: 3.5.1-alpha-1693007 |
I'd like to set up a 3-node ZooKeeper cluster with version 3.5.1. In the Kubernetes network model, a pod and a service have different IP addresses, so in order to deploy into Kubernetes I have to make each ZooKeeper pod list itself as 0.0.0.0 so that it can start correctly. The configuration is as below: zk1: zoo.cfg standaloneEnabled=false dynamicConfigFile=/opt/zookeeper/conf/zoo.cfg.dynamic zoo.cfg.dynamic server.1=0.0.0.0:2888:3888:participant;2181 server.2=10.62.56.192:2888:3888:participant;2181 server.3=10.62.56.193:2888:3888:participant;2181 zk2: zoo.cfg standaloneEnabled=false dynamicConfigFile=/opt/zookeeper/conf/zoo.cfg.dynamic zoo.cfg.dynamic server.1=10.62.56.191:2888:3888:participant;2181 server.2=0.0.0.0:2888:3888:participant;2181 server.3=10.62.56.193:2888:3888:participant;2181 zk3: zoo.cfg standaloneEnabled=false dynamicConfigFile=/opt/zookeeper/conf/zoo.cfg.dynamic zoo.cfg.dynamic server.1=10.62.56.191:2888:3888:participant;2181 server.2=10.62.56.192:2888:3888:participant;2181 server.3=0.0.0.0:2888:3888:participant;218 The result is that: 1. It looks like the election is successful; a new dynamic file is generated on every node (/opt/zookeeper/conf/zoo.cfg.dynamic.100000000) like below: server.1=10.62.56.191:2888:3888:participant;0.0.0.0:2181 server.2=0.0.0.0:2888:3888:participant;0.0.0.0:2181 server.3=10.62.56.193:2888:3888:participant;0.0.0.0:2181 2. 
But the cluster doesn't really work, I saw the errors: 0:0:2181)(secure=disabled):Learner@273] - Unexpected exception, tries=3, remaining init limit=16997, connecting to /0.0.0.0:2888 java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:204) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:589) at org.apache.zookeeper.server.quorum.Learner.sockConnect(Learner.java:227) at org.apache.zookeeper.server.quorum.Learner.connectToLeader(Learner.java:256) at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:74) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:1064) 2015-12-15 04:35:00,403 [myid:1] - INFO 2015-12-15 04:35:00,585 [myid:1] - INFO [QuorumPeer[myid=1](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):Follower@198] - shutdown called java.lang.Exception: shutdown Follower at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:198) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:1068) |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 |
Important
|
2 years, 16 weeks ago | dynamic reconfiguration | 0|i2pwlr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2342 | Migrate to Log4J 2. |
Bug | Open | Major | Unresolved | Chris Nauroth | Chris Nauroth | Chris Nauroth | 09/Dec/15 19:59 | 25/Feb/20 11:50 | 3.7.0 | 3 | 24 | LOG4J2-1514, ZOOKEEPER-3737, ZOOKEEPER-2393, ZOOKEEPER-1371, HADOOP-12956, LOG4J2-63, ZOOKEEPER-2659, ZOOKEEPER-3677 | ZOOKEEPER-1371 removed our source code dependency on Log4J. It appears that this also removed the Log4J SLF4J binding jar from the runtime classpath. Without any SLF4J binding jar available on the runtime classpath, it is impossible to write logs. This JIRA investigated migration to Log4J 2 as a possible path towards resolving the bug introduced by ZOOKEEPER-1371. At this point, we know this is not feasible short-term. This JIRA remains open to track long-term migration to Log4J 2. |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 weeks, 5 days ago | 0|i2poyf: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2341 | bin/zkEnv.cmd needs quotes around %JAVA% |
Bug | Resolved | Trivial | Duplicate | Unassigned | Karl Mortensen | Karl Mortensen | 09/Dec/15 17:13 | 09/Dec/15 17:24 | 09/Dec/15 17:24 | 3.4.6 | 0 | 2 | ZOOKEEPER-2281 | I just downloaded ZooKeeper 3.4.7 (the tracker wouldn't let me put that version in the "Affects Version/s" field) and it doesn't work out of the box on Windows 7, which is brutal for folks who don't understand what went wrong. It complains that you don't have JAVA_HOME set right if you have it set to a path with spaces, e.g. C:\program files\java\blah will fail. All the following need quotes around their %VARIABLE% expansions to deal with potential spaces in the path: * bin/zkCli.cmd * bin/zkEnv.cmd * bin/zkServer.cmd Should be a trivial fix. Definition of Done: zkCli.cmd, zkEnv.cmd and zkServer.cmd work out of the box on Windows 7. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 15 weeks, 1 day ago | 0|i2ponj: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2340 | JMX is disabled even if JMXDISABLE is false |
Bug | Closed | Minor | Fixed | Mohammad Arshad | Neha Bathra | Neha Bathra | 09/Dec/15 00:33 | 21/Jul/16 16:18 | 10/Dec/15 00:33 | 3.4.8, 3.5.2, 3.6.0 | 0 | 7 | Currently, to enable JMX for ZooKeeper you need to comment out the property JMXDISABLE entirely, because setting JMXDISABLE=false still disables JMX. | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 15 weeks ago | 0|i2pn53: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
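The bug in ZOOKEEPER-2340 above is a presence-vs-value check: the startup logic treats any setting of JMXDISABLE, even "false", as disabling JMX. The actual fix belongs in the shell scripts, but the distinction can be illustrated in Java; both helper names are hypothetical:

```java
public class JmxFlagSketch {

    // Buggy semantics from the report: JMX is disabled whenever the
    // variable is set at all, regardless of its value.
    static boolean jmxDisabledBuggy(String jmxDisable) {
        return jmxDisable != null;
    }

    // Intended semantics: only an explicit "true" (case-insensitive)
    // disables JMX; unset or "false" leaves it enabled.
    static boolean jmxDisabledFixed(String jmxDisable) {
        return Boolean.parseBoolean(jmxDisable);
    }

    public static void main(String[] args) {
        for (String v : new String[] { null, "true", "false" }) {
            System.out.println("JMXDISABLE=" + v
                    + " buggy->" + jmxDisabledBuggy(v)
                    + " fixed->" + jmxDisabledFixed(v));
        }
    }
}
```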
| ZooKeeper | ZOOKEEPER-2339 | Document differences in ZooKeeper usage on Windows. |
Improvement | Open | Major | Unresolved | Unassigned | Chris Nauroth | Chris Nauroth | 08/Dec/15 18:39 | 08/Dec/15 18:39 | documentation | 0 | 1 | The ZooKeeper documentation focuses primarily on usage for Unix-like systems. There are some small differences in usage on Windows. It would be good to enhance the documentation to cover this. | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 15 weeks, 2 days ago | 0|i2pmmn: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2338 | c bindings should create socket's with SOCK_CLOEXEC to avoid fd leaks on fork/exec |
Bug | Resolved | Major | Fixed | Radu Brumariu | James DeFelice | James DeFelice | 08/Dec/15 11:57 | 19/Mar/19 20:25 | 06/Dec/17 17:59 | 3.5.3, 3.4.11, 3.6.0 | 3.5.4, 3.6.0 | c client | 0 | 5 | 0 | 600 | MESOS-4065 | I've observed socket FD leaks in Apache Mesos when using ZK to coordinate master leadership: https://issues.apache.org/jira/browse/MESOS-4065 | 100% | 100% | 600 | 0 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 2 years, 8 weeks, 6 days ago | 0|i2plxb: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2337 | Fake "invalid" hostnames used in tests are sometimes valid |
Bug | Closed | Major | Duplicate | Timothy James Ward | Timothy James Ward | Timothy James Ward | 08/Dec/15 10:14 | 19/Dec/19 18:01 | 08/Dec/15 10:44 | 3.4.7, 3.5.1, 3.6.0 | 3.4.8, 3.5.2 | 0 | 2 | ZOOKEEPER-2252 | Some of the ZooKeeper tests use "fake" hostnames to trigger host resolution failures. The problem is that these are syntactically valid hostnames which are sometimes actually resolvable, e.g. when configured in VMs. At the moment I am unable to build cleanly because I get test failures in the two test methods that do this. The tests work equally well if syntactically invalid hostnames are used, and the test cases become more portable at the same time. The affected test cases are: org.apache.zookeeper.test.StaticHostProviderTest.testTwoInvalidHostAddresses and org.apache.zookeeper.test.StaticHostProviderTest.testOneInvalidHostAddresses See GitHub pull request https://github.com/apache/zookeeper/pull/48 for a proposed fix |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 15 weeks, 2 days ago | 0|i2plrz: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2336 | Jenkins not working due to old SVN |
Test | Resolved | Major | Fixed | Akihiro Suda | Akihiro Suda | Akihiro Suda | 07/Dec/15 03:12 | 03/Mar/16 02:58 | 03/Mar/16 02:58 | build | 06/Dec/15 | 0 | 3 | INFRA-10919 | Jenkins seems to have been broken since Build #2976 (Dec 6, 2015) due to an old SVN working copy. https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2976/console {panel} [exec] svn: E155036: Please see the 'svn upgrade' command [exec] svn: E155036: The working copy at '/home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk' [exec] is too old (format 10) to work with client version '1.8.8 (r1568071)' (expects format 31). You need to upgrade the working copy first. [exec] {panel} |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 3 weeks ago | 0|i2piof: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2335 | Java Compilation Error in ClientCnxn.java |
Bug | Resolved | Major | Fixed | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 06/Dec/15 00:28 | 16/Oct/16 11:23 | 06/Dec/15 14:01 | 3.5.2, 3.6.0 | java client, server | 0 | 4 | ZOOKEEPER-2330 | There are some compilation errors in the latest trunk code. {code} [javac] D:\gitHome\zookeeperTrunk\src\java\main\org\apache\zookeeper\ClientCnxn.java:49: error: package org.apache.log4j does not exist [javac] import org.apache.log4j.MDC; [javac] ^ [javac] D:\gitHome\zookeeperTrunk\src\java\main\org\apache\zookeeper\ClientCnxn.java:1108: error: cannot find symbol [javac] MDC.put("myid", hostPort); [javac] ^ [javac] symbol: variable MDC [javac] location: class ClientCnxn.SendThread [javac] 2 errors {code} These compilation errors were introduced by the ZOOKEEPER-2330 patch. That patch used the log4j API, but the log4j dependency had already been removed by ZOOKEEPER-1371 |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 4 years, 15 weeks, 4 days ago | 0|i2pi0n: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2334 | Zookeeper Archives Out Date |
Bug | Resolved | Major | Fixed | Flavio Paiva Junqueira | Elias Levy | Elias Levy | 04/Dec/15 15:19 | 16/Dec/15 10:35 | 16/Dec/15 10:35 | 0 | 5 | The ZooKeeper download page and mirrors only carry the latest release of each release line. The page has a link to the archives page at archive.apache.org, but that page is missing all releases after 3.3.2. That means a large number of releases disappear from the official download site when a new release is published. In my particular case I was building a container based on 3.4.6. Once 3.4.7 came out my build broke, and it cannot be fixed as 3.4.7 can't be downloaded from anywhere official. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 14 weeks, 1 day ago | 0|i2pfev: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2332 | Zookeeper failed to start for empty txn log |
Bug | Patch Available | Critical | Unresolved | Shaohui Liu | Shaohui Liu | Shaohui Liu | 27/Nov/15 08:42 | 14/Dec/19 06:08 | 3.4.6 | 3.7.0 | 2 | 15 | ZOOKEEPER-2376 | We found that a ZooKeeper server with version 3.4.6 failed to start because there is an empty txn log in the log dir. I think we should skip the empty log file when restoring the datatree. Any suggestions? {code} 2015-11-27 19:16:16,887 [myid:] - ERROR [main:ZooKeeperServerMain@63] - Unexpected exception, exiting abnormally java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63) at org.apache.zookeeper.server.persistence.FileHeader.deserialize(FileHeader.java:64) at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.inStreamCreated(FileTxnLog.java:576) at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.createInputArchive(FileTxnLog.java:595) at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.goToNextLog(FileTxnLog.java:561) at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.next(FileTxnLog.java:643) at org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:158) at org.apache.zookeeper.server.ZKDatabase.loadDataBase(ZKDatabase.java:223) at org.apache.zookeeper.server.ZooKeeperServer.loadData(ZooKeeperServer.java:272) at org.apache.zookeeper.server.ZooKeeperServer.startdata(ZooKeeperServer.java:399) at org.apache.zookeeper.server.NIOServerCnxnFactory.startup(NIOServerCnxnFactory.java:122) at org.apache.zookeeper.server.ZooKeeperServerMain.runFromConfig(ZooKeeperServerMain.java:113) at org.apache.zookeeper.server.ZooKeeperServerMain.initializeAndRun(ZooKeeperServerMain.java:86) at org.apache.zookeeper.server.ZooKeeperServerMain.main(ZooKeeperServerMain.java:52) at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:116) at 
org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:78) {code} |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 1 year, 17 weeks ago | 0|i2p0fr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
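The skip-empty-files behavior proposed in ZOOKEEPER-2332 above can be sketched as a pre-filter over candidate log files: a zero-length file has no header to deserialize, so it should never reach the code that hit the EOFException. usableLogs is a hypothetical helper for illustration, not the FileTxnLog API:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class TxnLogFilterSketch {

    // Keep only log files with at least a header's worth of bytes to read.
    static List<Path> usableLogs(List<Path> candidates) throws IOException {
        List<Path> result = new ArrayList<>();
        for (Path p : candidates) {
            if (Files.size(p) > 0) {   // zero-length file: nothing to deserialize, skip it
                result.add(p);
            }
        }
        return result;
    }

    public static void main(String[] args) throws IOException {
        // Illustrative file names modeled on ZooKeeper's log.<zxid> convention.
        Path dir = Files.createTempDirectory("txnlog-sketch");
        Path empty = Files.createFile(dir.resolve("log.100000001"));
        Path nonEmpty = Files.write(dir.resolve("log.100000002"), new byte[] { 1, 2, 3 });
        System.out.println("usable logs: " + usableLogs(Arrays.asList(empty, nonEmpty)));
    }
}
```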
| ZooKeeper | ZOOKEEPER-2331 | Many instances of ZookeeperSaslClient in one process - every instance should use different jaas config section. |
Wish | Resolved | Minor | Duplicate | Unassigned | Piotr Chmieliński | Piotr Chmieliński | 26/Nov/15 11:39 | 24/Mar/16 01:18 | 28/Nov/15 18:11 | java client | 0 | 2 | ZOOKEEPER-2139 | First of all, I don't know if "Wish" is the best issue type for this; I just want to ask a question. Why is the login field in ZooKeeperSaslClient static? https://github.com/apache/zookeeper/blob/trunk/src/java/main/org/apache/zookeeper/client/ZooKeeperSaslClient.java#L81 I'd like to have many ZooKeeper clients in one process, and I want each of them to read a different section from the JAAS config. I know that I can specify which one should be read by setting a system property: https://github.com/apache/zookeeper/blob/trunk/src/java/main/org/apache/zookeeper/client/ZooKeeperSaslClient.java#L114 Unfortunately, the login field is static and it is instantiated during creation of the first ZooKeeperSaslClient instance. Maybe there is some reason behind the decision to make "login" static; if yes, could you please explain it? |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 16 weeks, 5 days ago | 0|i2ozhz: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
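The consequence of the static field in ZOOKEEPER-2331 above can be reproduced in a minimal sketch: whichever client is constructed first decides the config for every later client in the process. All names here are illustrative stand-ins, not the real ZooKeeperSaslClient:

```java
public class StaticLoginSketch {

    // Stand-in for the Login object; records which JAAS section it was built from.
    static class Login {
        final String section;
        Login(String section) { this.section = section; }
    }

    // Mirrors the reported problem: the field is static, so it is created once,
    // from whatever config the *first* client instance saw.
    static Login login;

    StaticLoginSketch(String configSection) {
        if (login == null) {
            login = new Login(configSection);
        }
    }

    // Every instance reports the section captured at first construction.
    String effectiveSection() {
        return login.section;
    }

    public static void main(String[] args) {
        StaticLoginSketch a = new StaticLoginSketch("ClientA");
        StaticLoginSketch b = new StaticLoginSketch("ClientB");
        // Both clients end up using ClientA's section.
        System.out.println(a.effectiveSection() + " / " + b.effectiveSection());
    }
}
```

Making the field an instance field (as the duplicate target pursued) lets each client carry its own login configuration.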
| ZooKeeper | ZOOKEEPER-2330 | ZooKeeper close API does not close Login thread. |
Bug | Closed | Major | Fixed | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 23/Nov/15 05:03 | 21/Jul/16 16:18 | 08/May/16 16:24 | 3.5.0 | 3.5.2, 3.6.0 | java client | 0 | 5 | ZOOKEEPER-2396, ZOOKEEPER-2335 | When Kerberos is used as the authentication mechanism, one login thread runs in the background for the ZooKeeper client as well as the ZooKeeper server. This problem is related to the ZooKeeper client, and the scenario is as follows: # The main application connects to ZooKeeper {code} ZooKeeper zooKeeper = new ZooKeeper(zookeeperConnectionString, sessionTimeout, this) {code} # Completes its work with ZooKeeper # Calls close() on the ZooKeeper handle, and continues with the rest of the application-specific work A thread dump, taken after the 3rd step, shows that the login thread is still alive {code} "Thread-1" daemon prio=6 tid=0x04842c00 nid=0x1f04 waiting on condition [0x05b7f000] java.lang.Thread.State: TIMED_WAITING (sleeping) at java.lang.Thread.sleep(Native Method) at org.apache.zookeeper.Login$1.run(Login.java:180) at java.lang.Thread.run(Thread.java:722) {code} |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 45 weeks, 4 days ago | 0|i2os1j: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
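What the fix for ZOOKEEPER-2330 above has to guarantee can be sketched with a stand-in client whose close() interrupts and joins its background refresh thread. The names are illustrative, not the real ZooKeeper Login class:

```java
public class LoginThreadSketch {

    // Stand-in for the background ticket-refresh ("Login") thread from the report.
    private final Thread refreshThread;

    LoginThreadSketch() {
        refreshThread = new Thread(() -> {
            try {
                while (true) {
                    Thread.sleep(60_000); // placeholder for periodic ticket refresh
                }
            } catch (InterruptedException e) {
                // interrupted by close(): exit cleanly
            }
        }, "login-refresh");
        refreshThread.setDaemon(true);
        refreshThread.start();
    }

    // The behavior the report asks for: close() must also stop the login thread,
    // so nothing lingers in a thread dump after the client is closed.
    void close() throws InterruptedException {
        refreshThread.interrupt();
        refreshThread.join();
    }

    boolean loginThreadAlive() {
        return refreshThread.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        LoginThreadSketch client = new LoginThreadSketch();
        System.out.println("before close: " + client.loginThreadAlive());
        client.close();
        System.out.println("after close:  " + client.loginThreadAlive());
    }
}
```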
| ZooKeeper | ZOOKEEPER-2329 | Clear javac and javadoc warning from zookeeper |
Bug | Closed | Minor | Fixed | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 21/Nov/15 16:41 | 21/Jul/16 16:18 | 21/Nov/15 18:35 | 3.5.2 | 0 | 4 | Currently ZooKeeper java code has 10 javac and 1 javadoc warning. These should be removed. *javac warnings* {noformat} [javac] Compiling 228 source files to D:\gitHome\zookeeperTrunk\build\classes [javac] D:\gitHome\zookeeperTrunk\src\java\main\org\apache\zookeeper\ZooKeeperMain.java:226: warning: [rawtypes] found raw type: List [javac] List args = new LinkedList(); [javac] ^ [javac] missing type arguments for generic class List<E> [javac] where E is a type-variable: [javac] E extends Object declared in interface List [javac] D:\gitHome\zookeeperTrunk\src\java\main\org\apache\zookeeper\ZooKeeperMain.java:226: warning: [rawtypes] found raw type: LinkedList [javac] List args = new LinkedList(); [javac] ^ [javac] missing type arguments for generic class LinkedList<E> [javac] where E is a type-variable: [javac] E extends Object declared in class LinkedList [javac] D:\gitHome\zookeeperTrunk\src\java\main\org\apache\zookeeper\ZooKeeperMain.java:233: warning: [unchecked] unchecked call to add(E) as a member of the raw type List [javac] args.add(value); [javac] ^ [javac] where E is a type-variable: [javac] E extends Object declared in interface List [javac] D:\gitHome\zookeeperTrunk\src\java\main\org\apache\zookeeper\ZooKeeperMain.java:239: warning: [unchecked] unchecked conversion [javac] cmdArgs = args; [javac] ^ [javac] required: List<String> [javac] found: List [javac] D:\gitHome\zookeeperTrunk\src\java\main\org\apache\zookeeper\jmx\ManagedUtil.java:62: warning: [rawtypes] found raw type: Enumeration [javac] Enumeration enumer = r.getCurrentLoggers(); [javac] ^ [javac] missing type arguments for generic class Enumeration<E> [javac] where E is a type-variable: [javac] E extends Object declared in interface Enumeration [javac] 
D:\gitHome\zookeeperTrunk\src\java\main\org\apache\zookeeper\server\admin\AdminServer.java:33: warning: [serial] serializable class AdminServerException has no definition of serialVersionUID [javac] public class AdminServerException extends Exception { [javac] ^ [javac] D:\gitHome\zookeeperTrunk\src\java\main\org\apache\zookeeper\server\admin\JettyAdminServer.java:142: warning: [serial] serializable class JettyAdminServer.CommandServlet has no definition of serialVersionUID [javac] private class CommandServlet extends HttpServlet { [javac] ^ [javac] D:\gitHome\zookeeperTrunk\src\java\main\org\apache\zookeeper\server\util\KerberosUtil.java:39: warning: [rawtypes] found raw type: Class [javac] getInstanceMethod = classRef.getMethod("getInstance", new Class[0]); [javac] ^ [javac] missing type arguments for generic class Class<T> [javac] where T is a type-variable: [javac] T extends Object declared in class Class [javac] D:\gitHome\zookeeperTrunk\src\java\main\org\apache\zookeeper\server\util\KerberosUtil.java:42: warning: [rawtypes] found raw type: Class [javac] new Class[0]); [javac] ^ [javac] missing type arguments for generic class Class<T> [javac] where T is a type-variable: [javac] T extends Object declared in class Class [javac] D:\gitHome\zookeeperTrunk\src\java\main\org\apache\zookeeper\server\util\OSMXBean.java:89: warning: [rawtypes] found raw type: Class [javac] new Class[0]); [javac] ^ [javac] missing type arguments for generic class Class<T> [javac] where T is a type-variable: [javac] T extends Object declared in class Class [javac] 10 warnings {noformat} *javadoc warning* {noformat} [javadoc] D:\gitHome\zookeeperTrunk\src\java\main\org\apache\zookeeper\server\PurgeTxnLog.java:172: warning - @return tag has no arguments. {noformat} |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 17 weeks, 4 days ago | 0|i2or27: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2328 | Connection broken for id 1 |
Bug | Resolved | Major | Not A Problem | Unassigned | Arun | Arun | 20/Nov/15 18:48 | 19/Apr/16 10:18 | 19/Apr/16 10:18 | 3.4.6 | quorum | 0 | 4 | zookeeper-3.4.6 | Hi, we have a 3-node ZooKeeper ensemble running version 3.4.6. myId 1 (the leader) and 2 (a follower) are working fine. The myId 3 node starts successfully, but when we check its status we see the error below, and we also do not see this instance taking any load. Any help will be highly appreciated. $./zkServer.sh status JMX enabled by default Using config: ../zookeeper/zookeeper-3.4.6/conf/zoo.cfg Error contacting service. It is probably not running. Server Logs: 2015-11-20 23:38:41,863 [myid:3] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@780] - Connection broken for id 1, my id = 3, error = java.net.SocketException: Connection reset at java.net.SocketInputStream.read(SocketInputStream.java:189) at java.net.SocketInputStream.read(SocketInputStream.java:121) at java.net.SocketInputStream.read(SocketInputStream.java:203) at java.io.DataInputStream.readInt(DataInputStream.java:387) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:765) 2015-11-20 23:38:41,863 [myid:3] - WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@780] - Connection broken for id 2, my id = 3, error = java.net.SocketException: Connection reset at java.net.SocketInputStream.read(SocketInputStream.java:189) at java.net.SocketInputStream.read(SocketInputStream.java:121) at java.net.SocketInputStream.read(SocketInputStream.java:203) at java.io.DataInputStream.readInt(DataInputStream.java:387) 2015-11-20 23:23:33,320 [myid:] - INFO [main:QuorumPeerConfig@103] - Reading configuration from: ../zookeeper/zookeeper-3.4.6/conf/zoo.cfg 2015-11-20 23:23:33,344 [myid:] - INFO [main:QuorumPeerConfig@340] - Defaulting to majority quorums 2015-11-20 23:23:33,351 [myid:3] - INFO [main:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3 2015-11-20 23:23:33,352 [myid:3] - INFO [main:DatadirCleanupManager@79] - 
autopurge.purgeInterval set to 0 2015-11-20 23:23:33,353 [myid:3] - INFO [main:DatadirCleanupManager@101] - Purge task is not scheduled. 2015-11-20 23:23:33,382 [myid:3] - INFO [main:QuorumPeerMain@127] - Starting quorum peer 2015-11-20 23:23:33,410 [myid:3] - INFO [main:NIOServerCnxnFactory@94] - binding to port 0.0.0.0/0.0.0.0:2181 2015-11-20 23:23:33,452 [myid:3] - INFO [main:QuorumPeer@959] - tickTime set to 2000 2015-11-20 23:23:33,452 [myid:3] - INFO [main:QuorumPeer@979] - minSessionTimeout set to -1 2015-11-20 23:23:33,453 [myid:3] - INFO [main:QuorumPeer@990] - maxSessionTimeout set to -1 2015-11-20 23:23:33,453 [myid:3] - INFO [main:QuorumPeer@1005] - initLimit set to 5 2015-11-20 23:23:33,493 [myid:3] - INFO [Thread-1:QuorumCnxManager$Listener@504] - My election bind port: <host_name>/<IP_address>:3888 2015-11-20 23:23:33,512 [myid:3] - INFO [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:QuorumPeer@714] - LOOKING 2015-11-20 23:23:33,515 [myid:3] - INFO [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:FastLeaderElection@815] - New election. 
My id = 3, proposed zxid=0x0 2015-11-20 23:23:33,528 [myid:3] - INFO [WorkerReceiver[myid=3]:FastLeaderElection@597] - Notification: 1 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2015-11-20 23:23:33,731 [myid:3] - INFO [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:FastLeaderElection@849] - Notification time out: 400 2015-11-20 23:23:33,732 [myid:3] - INFO [WorkerReceiver[myid=3]:FastLeaderElection@597] - Notification: 1 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2015-11-20 23:23:34,136 [myid:3] - INFO [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:FastLeaderElection@849] - Notification time out: 800 2015-11-20 23:23:34,137 [myid:3] - INFO [WorkerReceiver[myid=3]:FastLeaderElection@597] - Notification: 1 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2015-11-20 23:23:34,938 [myid:3] - INFO [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:FastLeaderElection@849] - Notification time out: 1600 2015-11-20 23:23:34,939 [myid:3] - INFO [WorkerReceiver[myid=3]:FastLeaderElection@597] - Notification: 1 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2015-11-20 23:23:36,540 [myid:3] - INFO [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:FastLeaderElection@849] - Notification time out: 3200 2015-11-20 23:23:36,540 [myid:3] - INFO [WorkerReceiver[myid=3]:FastLeaderElection@597] - Notification: 1 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) 2015-11-20 23:23:39,741 [myid:3] - INFO [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:FastLeaderElection@849] - Notification time out: 6400 2015-11-20 23:23:39,742 [myid:3] - INFO [WorkerReceiver[myid=3]:FastLeaderElection@597] 
- Notification: 1 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state) |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 48 weeks, 2 days ago | 0|i2oqi7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2327 | "ConnectionLoss for /dog" |
Bug | Open | Major | Unresolved | Unassigned | Andrew Pennebaker | Andrew Pennebaker | 19/Nov/15 20:16 | 12/Dec/16 03:36 | 3.5.0 | 3.3.6, 3.5.1 | 0 | 3 | Zookeeper v3.5.0-alpha experiences an error when trying to create a simple data node. Versions before and after 3.5.0-alpha work just fine, but this specific version oddly fails to create data nodes. Source: https://github.com/mcandre/docker-zookeeper/tree/3.5.0-alpha Trace: ``` $ git clone git@github.com:mcandre/docker-zookeeper.git $ cd docker-zookeeper.git $ git checkout 3.5.0-alpha $ make CONTAINER=$(docker run -d -p 2181:2181 -p 2888:2888 -p 3888:3888 mcandre/docker-zookeeper:3.5.0-alpha) docker exec $CONTAINER sh -c 'echo "create /dog moon" | zkCli.sh' Connecting to localhost:2181 2015-11-20 01:05:54,951 [myid:] - INFO [main:Environment@109] - Client environment:zookeeper.version=3.5.0-alpha-1615249, built on 08/01/2014 22:13 GMT 2015-11-20 01:05:54,958 [myid:] - INFO [main:Environment@109] - Client environment:host.name=190002648bbc 2015-11-20 01:05:54,958 [myid:] - INFO [main:Environment@109] - Client environment:java.version=1.7.0_85 2015-11-20 01:05:54,969 [myid:] - INFO [main:Environment@109] - Client environment:java.vendor=Oracle Corporation 2015-11-20 01:05:54,970 [myid:] - INFO [main:Environment@109] - Client environment:java.home=/usr/lib/jvm/java-7-openjdk-amd64/jre 2015-11-20 01:05:54,970 [myid:] - INFO [main:Environment@109] - Client 
environment:java.class.path=/zookeeper-3.5.0-alpha/bin/../build/classes:/zookeeper-3.5.0-alpha/bin/../build/lib/*.jar:/zookeeper-3.5.0-alpha/bin/../lib/slf4j-log4j12-1.7.5.jar:/zookeeper-3.5.0-alpha/bin/../lib/slf4j-api-1.7.5.jar:/zookeeper-3.5.0-alpha/bin/../lib/servlet-api-2.5-20081211.jar:/zookeeper-3.5.0-alpha/bin/../lib/netty-3.7.0.Final.jar:/zookeeper-3.5.0-alpha/bin/../lib/log4j-1.2.16.jar:/zookeeper-3.5.0-alpha/bin/../lib/jline-2.11.jar:/zookeeper-3.5.0-alpha/bin/../lib/jetty-util-6.1.26.jar:/zookeeper-3.5.0-alpha/bin/../lib/jetty-6.1.26.jar:/zookeeper-3.5.0-alpha/bin/../lib/javacc.jar:/zookeeper-3.5.0-alpha/bin/../lib/jackson-mapper-asl-1.9.11.jar:/zookeeper-3.5.0-alpha/bin/../lib/jackson-core-asl-1.9.11.jar:/zookeeper-3.5.0-alpha/bin/../lib/commons-cli-1.2.jar:/zookeeper-3.5.0-alpha/bin/../zookeeper-3.5.0-alpha.jar:/zookeeper-3.5.0-alpha/bin/../src/java/lib/*.jar:/zookeeper-3.5.0-alpha/bin/../conf: 2015-11-20 01:05:54,971 [myid:] - INFO [main:Environment@109] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib 2015-11-20 01:05:54,972 [myid:] - INFO [main:Environment@109] - Client environment:java.io.tmpdir=/tmp 2015-11-20 01:05:54,972 [myid:] - INFO [main:Environment@109] - Client environment:java.compiler=<NA> 2015-11-20 01:05:54,973 [myid:] - INFO [main:Environment@109] - Client environment:os.name=Linux 2015-11-20 01:05:54,973 [myid:] - INFO [main:Environment@109] - Client environment:os.arch=amd64 2015-11-20 01:05:54,974 [myid:] - INFO [main:Environment@109] - Client environment:os.version=4.0.9-boot2docker 2015-11-20 01:05:54,974 [myid:] - INFO [main:Environment@109] - Client environment:user.name=root 2015-11-20 01:05:54,974 [myid:] - INFO [main:Environment@109] - Client environment:user.home=/root 2015-11-20 01:05:54,975 [myid:] - INFO [main:Environment@109] - Client environment:user.dir=/ 2015-11-20 01:05:54,975 [myid:] - INFO 
[main:Environment@109] - Client environment:os.memory.free=26MB 2015-11-20 01:05:54,977 [myid:] - INFO [main:Environment@109] - Client environment:os.memory.max=247MB 2015-11-20 01:05:54,977 [myid:] - INFO [main:Environment@109] - Client environment:os.memory.total=30MB 2015-11-20 01:05:54,987 [myid:] - INFO [main:ZooKeeper@709] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@3c407d5 Welcome to ZooKeeper! 2015-11-20 01:05:55,062 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1093] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) 2015-11-20 01:05:55,099 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@963] - Socket connection established to localhost/127.0.0.1:2181, initiating session JLine support is enabled 2015-11-20 01:05:55,154 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1209] - Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect [zk: localhost:2181(CONNECTING) 0] create /dog moon Exception in thread "main" org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /dog at org.apache.zookeeper.KeeperException.create(KeeperException.java:99) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:1067) at org.apache.zookeeper.cli.CreateCommand.exec(CreateCommand.java:78) at org.apache.zookeeper.ZooKeeperMain.processZKCmd(ZooKeeperMain.java:670) at org.apache.zookeeper.ZooKeeperMain.processCmd(ZooKeeperMain.java:573) at org.apache.zookeeper.ZooKeeperMain.executeLine(ZooKeeperMain.java:356) at org.apache.zookeeper.ZooKeeperMain.run(ZooKeeperMain.java:316) at org.apache.zookeeper.ZooKeeperMain.main(ZooKeeperMain.java:276) make: *** [run] 
Error 1 ``` |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 51 weeks, 6 days ago | 0|i2oofr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2326 | Include connected server address:port in log |
Improvement | Closed | Minor | Fixed | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 18/Nov/15 17:23 | 21/Jul/16 16:18 | 05/Dec/15 16:46 | 3.5.2, 3.6.0 | java client | 0 | 3 | Currently the ZooKeeper client log contains a blank myid, as below {noformat} 2015-11-18 23:46:39,045 [myid:] - INFO [main-SendThread(192.168.1.3:2183):ClientCnxn$SendThread@1138] - Opening socket connection to server 192.168.1.3/192.168.1.3:2183. Will attempt to SASL-authenticate using Login Context section 'Client' 2015-11-18 23:46:40,387 [myid:] - WARN [main-SendThread(192.168.1.3:2183):ClientCnxn$SendThread@1206] - Client session timed out, have not heard from server in 1499ms for sessionid 0x200009eb6510002 2015-11-18 23:46:40,387 [myid:] - INFO [main-SendThread(192.168.1.3:2183):ClientCnxn$SendThread@1254] - Client session timed out, have not heard from server in 1499ms for sessionid 0x200009eb6510002, closing socket connection and attempting reconnect 2015-11-18 23:46:41,323 [myid:] - INFO [main-SendThread(192.168.1.3:2181):ZooKeeperSaslClient@235] - Client will use DIGEST-MD5 as SASL mechanism. {noformat} myid is blank. That is fine, since on the client side myid (server id) does not make any sense. But we can assign myid the server IP:port, which is very helpful information when analysing issues. So after the fix the log can look like the one below {noformat} 2015-11-19 03:51:27,254 [myid:192.168.1.3:2183] - INFO [main-SendThread(192.168.1.3:2183):Login@290] - successfully logged in. 2015-11-19 03:51:27,270 [myid:192.168.1.3:2183] - INFO [Thread-0:Login$1@124] - TGT refresh thread started. 2015-11-19 03:51:27,270 [myid:192.168.1.3:2183] - INFO [main-SendThread(192.168.1.3:2183):ZooKeeperSaslClient$1@297] - Client will use GSSAPI as SASL mechanism. 
2015-11-19 03:51:27,270 [myid:192.168.1.3:2183] - INFO [Thread-0:Login@298] - TGT valid starting at: Thu Nov 19 03:51:27 IST 2015 2015-11-19 03:51:27,270 [myid:192.168.1.3:2183] - INFO [Thread-0:Login@299] - TGT expires: Thu Nov 19 03:53:27 IST 2015 2015-11-19 03:51:27,270 [myid:192.168.1.3:2183] - INFO [Thread-0:Login$1@178] - TGT refresh sleeping until: Thu Nov 19 03:53:05 IST 2015 2015-11-19 03:51:27,285 [myid:192.168.1.3:2183] - INFO [main-SendThread(192.168.1.3:2183):ClientCnxn$SendThread@1141] - Opening socket connection to server 192.168.1.3/192.168.1.3:2183. Will attempt to SASL-authenticate using Login Context section 'Client' 2015-11-19 03:51:27,301 [myid:192.168.1.3:2183] - INFO [main-SendThread(192.168.1.3:2183):ClientCnxn$SendThread@981] - Socket connection established, initiating session, client: /192.168.1.2:53117, server: 192.168.1.3/192.168.1.3:2183 2015-11-19 03:51:28,632 [myid:192.168.1.3:2183] - WARN [main-SendThread(192.168.1.3:2183):ClientCnxn$SendThread@1209] - Client session timed out, have not heard from server in 1333ms for sessionid 0x0 2015-11-19 03:51:28,632 [myid:192.168.1.3:2183] - INFO [main-SendThread(192.168.1.3:2183):ClientCnxn$SendThread@1257] - Client session timed out, have not heard from server in 1333ms for sessionid 0x0, closing socket connection and attempting reconnect 2015-11-19 03:51:29,147 [myid:192.168.1.3:2181] - INFO [main-SendThread(192.168.1.3:2181):ZooKeeperSaslClient$1@297] - Client will use GSSAPI as SASL mechanism. 2015-11-19 03:51:29,152 [myid:192.168.1.3:2181] - INFO [main-SendThread(192.168.1.3:2181):ClientCnxn$SendThread@1141] - Opening socket connection to server 192.168.1.3/192.168.1.3:2181. 
Will attempt to SASL-authenticate using Login Context section 'Client' 2015-11-19 03:51:29,154 [myid:192.168.1.3:2181] - INFO [main-SendThread(192.168.1.3:2181):ClientCnxn$SendThread@981] - Socket connection established, initiating session, client: /192.168.1.2:53118, server: 192.168.1.3/192.168.1.3:2181 2015-11-19 03:51:30,487 [myid:192.168.1.3:2181] - WARN [main-SendThread(192.168.1.3:2181):ClientCnxn$SendThread@1209] - Client session timed out, have not heard from server in 1333ms for sessionid 0x0 {noformat} |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 15 weeks, 5 days ago | 0|i2oluv: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2325 | Data inconsistency if all snapshots empty or missing |
Bug | Resolved | Critical | Fixed | Andrew Grasso | Andrew Grasso | Andrew Grasso | 18/Nov/15 17:21 | 21/Aug/19 12:59 | 26/Oct/18 06:04 | 3.4.6 | 3.5.4, 3.6.0 | server | 1 | 15 | 3600 | 3000 | 600 | 16% | ZOOKEEPER-3513, ZOOKEEPER-3056 | When loading state from snapshots on startup, FileTxnSnapLog.java ignores the result of FileSnap.deserialize, which is -1L if no valid snapshots are found. Recovery proceeds with dt.lastProcessed == 0, its initial value. The result is that Zookeeper will process the transaction logs and then begin serving requests with a different state than the rest of the ensemble. To reproduce: In a healthy zookeeper cluster of size >= 3, shut down one node. Either delete all snapshots for this node or change all to be empty files. Restart the node. We believe this can happen organically if a node runs out of disk space. |
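A hedged sketch of the defensive check the report calls for: only the -1L sentinel comes from the report (the return value of FileSnap.deserialize); the method and parameter names here are illustrative, not ZooKeeper's actual recovery code.

```java
import java.io.IOException;

public class SnapRestoreSketch {
    // Treat a -1 result from snapshot deserialization as "no valid snapshot"
    // instead of silently continuing with lastProcessedZxid == 0, which is
    // the inconsistency scenario described in this report.
    static long restore(long deserializedZxid, boolean txnLogEmpty) throws IOException {
        if (deserializedZxid == -1L) {
            if (!txnLogEmpty) {
                // Snapshots missing/empty but txn logs present: refuse to
                // rebuild state from the logs alone.
                throw new IOException("No snapshot found, but there are log entries");
            }
            return 0L; // genuinely fresh data directory
        }
        return deserializedZxid;
    }
}
```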
16% | 16% | 600 | 3000 | 3600 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 3 | 9223372036854775807 | 1 year, 20 weeks, 6 days ago | 0|i2oluf: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2324 | Make Client authentication mechanism change optional |
Improvement | Open | Major | Unresolved | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 18/Nov/15 16:56 | 05/Feb/20 07:16 | 3.7.0, 3.5.8 | java client | 0 | 1 | Currently, if the ZooKeeper client fails to authenticate using the Kerberos GSSAPI mechanism, it automatically switches to the DIGEST-MD5 authentication mechanism. This should be configurable: whether to switch to DIGEST-MD5, or to fail the authentication and throw {{LoginException}} | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 18 weeks, 1 day ago | 0|i2oltr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
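One possible shape for the requested switch, sketched with a hypothetical system property; the real property name and how it is wired into the SASL client would be decided in the patch.

```java
import javax.security.auth.login.LoginException;

public class SaslFallbackSketch {
    // Hypothetical property name, for illustration only.
    static final String FALLBACK_PROP = "zookeeper.client.saslFallbackToDigestMd5";

    // Defaulting the flag to "true" preserves the current behavior
    // (silent fallback); setting it to "false" surfaces the failure.
    static String chooseMechanism(boolean gssapiFailed) throws LoginException {
        if (!gssapiFailed) {
            return "GSSAPI";
        }
        if (Boolean.parseBoolean(System.getProperty(FALLBACK_PROP, "true"))) {
            return "DIGEST-MD5";
        }
        throw new LoginException("GSSAPI authentication failed and fallback is disabled");
    }
}
```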
| ZooKeeper | ZOOKEEPER-2323 | ZooKeeper client enters into infinite AuthFailedException cycle if its unable to recreate Kerberos ticket |
Bug | Resolved | Major | Fixed | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 18/Nov/15 15:29 | 11/Aug/16 16:00 | 11/Aug/16 16:00 | 3.4.7, 3.5.1 | 3.5.2 | java client | 1 | 5 | ZOOKEEPER-2139 | ZooKeeper client enters into infinite AuthFailedException cycle. For every operation its throws AuthFailedException Here is the create operation exception {code} org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for /continuousRunningZKClient at org.apache.zookeeper.KeeperException.create(KeeperException.java:127) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1753) {code} This can be reproduced easily with the following steps: # Reduce the ZooKeeper client principal max life for example set 2 min. use command {color:blue} modprinc -maxlife 2min zkcli {color} in kadmin. (This is done to reduce the issue reproduce time) # Connect Client to ZooKeeper quorum,let it gets connected and some operations are done successfully # Disconnect the Client's network, by pulling out the Ethernet cable or by any way. Now the Client is in disconnected state, no operation is expected,Client tries to reconnect to different-different servers in the ZooKeeper quorum. # After two minutes Client tries to get new Keberos ticket and it fails. # Connect the Client to network. Client comes in connected state but AuthFailedException for every operation. |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 32 weeks ago | 0|i2olnj: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2322 | Zookeeper server throws NullPointerExceptions while revalidating the session to the client |
Bug | Open | Minor | Unresolved | Unassigned | Neha Bathra | Neha Bathra | 16/Nov/15 06:38 | 16/Nov/15 06:38 | 0 | 2 | 3 nodes Suse 11 SP3 cluster | This happened in a long-running scenario where the client is connected to ZooKeeper and leader re-election happens at some interval. Stack trace: 2015-11-05 03:15:42,953 [myid:1] - WARN [NIOWorkerThread-6:WorkerService$ScheduledWorkRequest@164] - Unexpected exception java.lang.NullPointerException at org.apache.zookeeper.server.quorum.LearnerZooKeeperServer.revalidateSession(LearnerZooKeeperServer.java:93) at org.apache.zookeeper.server.ZooKeeperServer.reopenSession(ZooKeeperServer.java:692) at org.apache.zookeeper.server.ZooKeeperServer.processConnectRequest(ZooKeeperServer.java:1039) at org.apache.zookeeper.server.NIOServerCnxn.readConnectRequest(NIOServerCnxn.java:434) at org.apache.zookeeper.server.NIOServerCnxn.readPayload(NIOServerCnxn.java:180) at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:340) at org.apache.zookeeper.server.NIOServerCnxnFactory$IOWorkRequest.doWork(NIOServerCnxnFactory.java:536) at org.apache.zookeeper.server.WorkerService$ScheduledWorkRequest.run(WorkerService.java:162) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 18 weeks, 3 days ago | 0|i2ofqn: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2321 | C-client session watcher removal is not thread safe |
Bug | Open | Major | Unresolved | Unassigned | Hadriel Kaplan | Hadriel Kaplan | 13/Nov/15 19:12 | 13/Nov/15 19:12 | 3.5.1 | c client | 0 | 1 | Invoking the C-client API function {{zoo_set_watcher()}} to remove/change a session event watcher is not a thread-safe operation. The IO thread accesses the session watcher (the one stored in the zhandle_t.watcher member) and copies its value into completion events, which are then later processed by the completion thread. This happens when it's processing session events, such as session connected/connecting/expired events. Meanwhile after the value has been copied by the IO thread, but before the completion thread has used it, the main thread could change the watcher to NULL using {{zoo_set_watcher()}} because the calling application may be free'ing it. The call to {{zoo_set_watcher()}} will return even though the IO and completion threads still have the old watcher pointer value, and the main application cannot safely free it. But since the function call returns, the main application thinks it can free it, and boom goes the dynamite. So... either there needs to be a lockout while the IO/completion threads process session events, or the {{zoo_set_watcher()}} needs to become asynchronous itself by going through the same processing pipeline to the completion thread and having a completion callback to tell the calling application when it succeeded/failed. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 18 weeks, 6 days ago | 0|i2oe6f: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2320 | C-client crashes when removing watcher asynchronously in "local" mode |
Bug | Patch Available | Major | Unresolved | Abraham Fine | Hadriel Kaplan | Hadriel Kaplan | 12/Nov/15 18:37 | 16/Dec/18 10:01 | 3.5.1 | c client | 0 | 5 | 0 | 4200 | ZOOKEEPER-1919 | The C-client library will crash when invoking the asynchronous {{zoo_aremove_watchers()}} API function with the '{{local}}' argument set to 1. The reason is: if the local argument is 1/true, then the code does '{{notify_sync_completion((struct sync_completion *)data);}}' But casting the '{{data}}' variable to a {{sync_completion}} struct pointer is bogus/invalid, and when it's later handled as that struct pointer, it accesses invalid memory. As a side note: it will work ok when called _synchronously_ through {{zoo_remove_watchers()}}, because that function creates a {{sync_completion}} struct and passes it to the async {{zoo_aremove_watchers()}}, but it will not work ok when the async function is used directly, for the reason stated previously. Another side note: the docs state that setting the 'local' flag makes the C-client remove the watcher "even if there is no server connection" - but really it makes the C-client remove the watcher without notifying the server at *all*, even if the connection to a server is up. (well... that's what it would do if it didn't just crash instead ;) |
100% | 100% | 4200 | 0 | pull-request-available, remove_watches | 9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 1 year, 17 weeks ago | 0|i2obxb: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2319 | UnresolvedAddressException cause the QuorumCnxManager.Listener exit |
Bug | Resolved | Major | Fixed | Michael Han | Zhaohui Yu | Zhaohui Yu | 09/Nov/15 18:36 | 05/Jul/18 07:18 | 18/Jun/18 00:05 | 3.4.6 | 3.4.7, 3.5.0, 3.6.0 | 0 | 7 | 0 | 1200 | ZOOKEEPER-1506 | Given three nodes with the leader on node 2: that machine had a problem, so I shut it down and moved its host name to another machine. Then I started the node on the new machine, but the new node could not join. I found that the Listener threads on nodes 1 and 3 had exited. Looking at the code of the Listener's run method, I think we should catch UnresolvedAddressException to keep the Listener from exiting. {noformat} @Override public void run() { while((!shutdown) && (numRetries < 3)){ try { // bind and accept receiveConnection(client); } catch (IOException e) { } } // } {noformat} |
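The suggested fix can be sketched as follows. UnresolvedAddressException is a RuntimeException, so the existing catch (IOException e) never sees it and the thread dies. The Acceptor interface and retry accounting below are illustrative, not the actual QuorumCnxManager code.

```java
import java.io.IOException;
import java.nio.channels.UnresolvedAddressException;

public class ListenerLoopSketch {
    interface Acceptor { void acceptOnce() throws IOException; }

    // Widen the catch so an unresolvable peer host name counts as a retry
    // instead of propagating out of run() and killing the listener thread.
    static int runLoop(Acceptor acceptor, int maxRetries) {
        int numRetries = 0;
        boolean shutdown = false;
        while (!shutdown && numRetries < maxRetries) {
            try {
                acceptor.acceptOnce();            // bind and accept
                numRetries = 0;                   // a successful pass resets the counter
            } catch (UnresolvedAddressException e) {
                numRetries++;                     // previously: uncaught, thread exits
            } catch (IOException e) {
                numRetries++;
            }
        }
        return numRetries;
    }
}
```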
100% | 100% | 1200 | 0 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 1 year, 37 weeks ago | 0|i2o63b: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2318 | segfault in auth_completion_func |
Bug | Resolved | Major | Duplicate | Unassigned | Marshall McMullen | Marshall McMullen | 09/Nov/15 12:26 | 27/May/16 01:27 | 27/May/16 01:27 | 3.5.0 | c client | 1 | 5 | ZOOKEEPER-1485 | We have seen some sporadic issues with unexplained segfaults inside auth_completion_func. The interesting thing is we are not using any auth mechanism at all. This happened against this version of the code: svn.apache.org/repos/asf/zookeeper/trunk@1547702 Here's the stacktrace we are seeing: {code} Thread 1 (Thread 0x7f21d13ff700 ? (LWP 5230)): #0 0x00007f21efff42f0 in auth_completion_func (rc=0, zh=0x7f21e7470800) at src/zookeeper.c:1696 #1 0x00007f21efff7898 in zookeeper_process (zh=0x7f21e7470800, events=2) at src/zookeeper.c:2708 #2 0x00007f21f0006583 in do_io (v=0x7f21e7470800) at src/mt_adaptor.c:440 #3 0x00007f21eeab7e9a in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0 #4 0x00007f21ed1803fd in clone () from /lib/x86_64-linux-gnu/libc.so.6 #5 0x0000000000000000 in ?? () {code} The offending line in our case is: 1696 LOG_INFO(LOGCALLBACK(zh), "Authentication scheme %s succeeded", zh->auth_h.auth->scheme); It must be the case that zh->auth_h.auth is NULL for this to happen since the code path returns if zh is NULL. 
Interesting log messages around this time: {code} Socket [10.170.243.7:2181] zk retcode=-2, errno=115(Operation now in progress): unexpected server response: expected 0xfffffff9, but received 0xfffffff8 Priming connection to [10.170.243.4:2181]: last_zxid=0x370eb4d initiated connection to server [10.170.243.4:2181] Oct 13 12:03:21.273384 zookeeper - INFO [NIOServerCxnFactory.AcceptThread:/10.170.243.4:2181:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /10.170.243.4:48523 Oct 13 12:03:21.274321 zookeeper - WARN [NIOWorkerThread-24:ZooKeeperServer@822] - Connection request from old client /10.170.243.4:48523; will be dropped if server is in r-o mode Oct 13 12:03:21.274452 zookeeper - INFO [NIOWorkerThread-24:ZooKeeperServer@869] - Client attempting to renew session 0x3000011596d004a at /10.170.243.4:48523; client last zxid is 0x30370eb4d; server last zxid is 0x30370eb4d Oct 13 12:03:21.274584 zookeeper - INFO [NIOWorkerThread-24:Learner@115] - Revalidating client: 0x3000011596d004a session establishment complete on server [10.170.243.4:2181], sessionId=0x3000011596d004a, negotiated timeout=20000 Oct 13 12:03:21.275693 zookeeper - INFO [QuorumPeer[myid=1]/10.170.243.4:2181:ZooKeeperServer@611] - Established session 0x3000011596d004a with negotiated timeout 20000 for client /10.170.243.4:48523 Oct 13 12:03:24.229590 zookeeper - WARN [NIOWorkerThread-8:NIOServerCnxn@361] - Unable to read additional data from client sessionid 0x3000011596d004a, likely client has closed socket Oct 13 12:03:24.230018 zookeeper - INFO [NIOWorkerThread-8:NIOServerCnxn@999] - Closed socket connection for client /10.170.243.4:48523 which had sessionid 0x3000011596d004a Oct 13 12:03:24.230257 zookeeper - WARN [NIOWorkerThread-19:NIOServerCnxn@361] - Unable to read additional data from client sessionid 0x100002743aa0001, likely client has closed socket {code} |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 42 weeks, 6 days ago | 0|i2o5ef: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2317 | Non-OSGi compatible version |
Bug | Closed | Blocker | Fixed | Sachin | Markus Tippmann | Markus Tippmann | 09/Nov/15 05:59 | 20/May/19 13:50 | 29/May/18 20:30 | 3.5.1 | 3.6.0, 3.5.5 | build | 0 | 9 | 0 | 1800 | Karaf OSGi container | Bundle cannot be deployed to OSGi container. Manifest version is not OSGi compatible. Instead of using 3.5.1-alpha, manifest needs to contain 3.5.1.alpha |
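The version mangling described above is mechanical: OSGi versions are major.minor.micro with an optional dot-separated qualifier, so a hyphenated Maven qualifier like 3.5.1-alpha is rejected by the container. A small illustrative helper (not how the ZooKeeper build actually fixed it):

```java
public class OsgiVersionSketch {
    // Convert a Maven-style version ("3.5.1-alpha") to an OSGi-compatible
    // one ("3.5.1.alpha") by turning the first hyphen into the qualifier dot.
    // OSGi qualifiers may themselves contain '-' and '_', so only the first
    // hyphen needs replacing.
    static String toOsgiVersion(String mavenVersion) {
        int dash = mavenVersion.indexOf('-');
        return dash < 0
                ? mavenVersion
                : mavenVersion.substring(0, dash) + "." + mavenVersion.substring(dash + 1);
    }
}
```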
100% | 100% | 1800 | 0 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 1 year, 42 weeks, 1 day ago | 0|i2o4tr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2316 | comment does not match code logic |
Bug | Resolved | Trivial | Fixed | Umesh Panchaksharaiah | sunhaitao | sunhaitao | 06/Nov/15 09:34 | 12/Jul/17 23:07 | 27/Apr/17 17:21 | 3.5.1 | 3.5.4, 3.6.0 | server | 0 | 7 | When I read the code below, the comment is put in an incorrect place. " // in order to be committed, a proposal must be accepted by a quorum " should be placed on top of: if (!p.hasAllQuorums()) { return false; } --------------------------------------------------------------------------------------- 3.5.1 Leader code // getting a quorum from all necessary configurations if (!p.hasAllQuorums()) { return false; } // commit proposals in order if (zxid != lastCommitted+1) { LOG.warn("Commiting zxid 0x" + Long.toHexString(zxid) + " from " + followerAddr + " not first!"); LOG.warn("First is " + (lastCommitted+1)); } // in order to be committed, a proposal must be accepted by a quorum outstandingProposals.remove(zxid); |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 2 years, 47 weeks ago | 0|i2o293: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2315 | Change client connect zk service timeout log level from Info to Warn level |
Improvement | Closed | Minor | Fixed | Yiqun Lin | Yiqun Lin | Yiqun Lin | 05/Nov/15 22:29 | 21/Jul/16 16:18 | 08/Nov/15 18:13 | 3.4.6 | 3.4.7, 3.5.2, 3.6.0 | java client | 0 | 4 | Recently the ResourceManager of my Hadoop cluster failed suddenly, so I looked into the ResourceManager log. But the log was not helpful for finding the reason directly, until I found the ZooKeeper timeout INFO log record. {code} 2015-11-06 06:34:11,257 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1446016482901_292094_01_000140 of capacity <memory:1024, vCores:1> on host mofa2089:41361, which has 30 containers, <memory:31744, vCores:30> used and <memory:9216, vCores:10> available after allocation 2015-11-06 06:34:11,266 INFO org.apache.zookeeper.ClientCnxn: Unable to reconnect to ZooKeeper service, session 0x24f4fd5118e5c6e has expired, closing socket connection 2015-11-06 06:34:11,271 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1446016482901_292094_01_000105 Container Transitioned from RUNNING to COMPLETED 2015-11-06 06:34:11,271 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: Completed container: container_1446016482901_292094_01_000105 in state: COMPLETED event:FINISHED 2015-11-06 06:34:11,271 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=dongwei OPERATION=AM Released Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1446016482901_292094 CONTAINERID=container_1446016482901_292094_01_000105 2015-11-06 06:34:11,271 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1446016482901_292094_01_000105 of capacity <memory:1024, vCores:1> on host mofa010079:50991, which currently has 29 containers, <memory:33792, vCores:29> used and <memory:7168, vCores:11> available, release resources=true 2015-11-06 06:34:11,271 INFO 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: Application attempt appattempt_1446016482901_292094_000001 released container container_1446016482901_292094_01_000105 on node: host: mofa010079:50991 #containers=29 available=<memory:7168, vCores:11> used=<memory:33792, vCores:29> with event: FINISHED 2015-11-06 06:34:11,272 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1446016482901_292094_01_000141 Container Transitioned from NEW to ALLOCATED 2015-11-06 06:34:11,272 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=dongwei OPERATION=AM Allocated Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1446016482901_292094 CONTAINERID=container_1446016482901_292094_01_000141 2015-11-06 06:34:11,272 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1446016482901_292094_01_000141 of capacity <memory:1024, vCores:1> on host mofa010079:50991, which has 30 containers, <memory:34816, vCores:30> used and <memory:6144, vCores:10> available after allocation 2015-11-06 06:34:11,295 WARN org.apache.hadoop.yarn.server.resourcemanager.amlauncher.ApplicationMasterLauncher: org.apache.hadoop.yarn.server.resourcemanager.amlauncher.ApplicationMasterLauncher$LauncherThread interrupted. Returning. 
2015-11-06 06:34:11,296 INFO org.apache.hadoop.ipc.Server: Stopping server on 8032 2015-11-06 06:34:11,297 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server Responder 2015-11-06 06:34:11,297 INFO org.apache.hadoop.ipc.Server: Stopping server on 8030 2015-11-06 06:34:11,297 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 8032 2015-11-06 06:34:11,298 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server Responder 2015-11-06 06:34:11,298 INFO org.apache.hadoop.ipc.Server: Stopping server on 8031 2015-11-06 06:34:11,298 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 8030 2015-11-06 06:34:11,300 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 8031 2015-11-06 06:34:11,300 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server Responder {code} The problem is solved, but it is too difficult to find the ZK service connection timeout entry among so many INFO log records, and such records are easily overlooked. So we should change these ZK session timeout log messages from INFO level to WARN. |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 4 years, 19 weeks, 3 days ago | 0|i2o1fj: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2314 | Improvements to SASL |
Improvement | Open | Major | Unresolved | Flavio Paiva Junqueira | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 05/Nov/15 11:26 | 05/Feb/20 07:15 | 3.4.6, 3.5.1 | 3.7.0, 3.5.8 | documentation | 0 | 9 | ZOOKEEPER-2396, ZOOKEEPER-2397 | Points that occur to me right now: # The login object in ZooKeeperSaslClient is static, which means that if you try to create another client for tests, the login object will be the first one you've set for all runs. I've experienced this with 3.4.6. # There are a number of properties spread across the code that do not appear in the docs. For example, zookeeper.allowSaslFailedClients isn't documented afaict. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 2 weeks, 3 days ago | 0|i2o0fr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2313 | Refactor ZooKeeperServerBean and its subclasses (LeaderBean, ObserverBean, FollowerBean) |
Improvement | Open | Minor | Unresolved | Rabi Kumar K C | Edward Ribeiro | Edward Ribeiro | 04/Nov/15 05:23 | 05/Feb/20 07:15 | 3.4.6 | 3.7.0, 3.5.8 | server | 0 | 2 | 0 | 4200 | Following the work on ZOOKEEPER-2142, the goal of this ticket is to add a new constructor to ZooKeeperServerBean to pass the name as the parameter, so classes that extend from ZooKeeperServerBean shouldn't need to implement getName() method. Also, ObserverBean seems to be in the wrong package, it should be under server.quorum rather than just server. | 100% | 100% | 4200 | 0 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 20 weeks, 1 day ago | 0|i2nxlr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2312 | Fix arrow direction in the 2-phase commit diagram in Zookeeper internal docs |
Bug | Open | Minor | Unresolved | Raju Bairishetti | Raju Bairishetti | Raju Bairishetti | 03/Nov/15 22:53 | 03/Nov/15 23:28 | documentation | 0 | 2 | https://zookeeper.apache.org/doc/r3.3.3/zookeeperInternals.html The leader issues a *commit request* to the followers once the acks are received from the followers. But the 2-phase commit diagram shows the direction of commit from Follower to Leader. [2-phase-commit-image|https://github.com/apache/zookeeper/blob/trunk/src/docs/src/documentation/resources/images/2pc.jpg] |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 20 weeks, 1 day ago | 0|i2nx67: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2311 | assert in setup_random |
Bug | Closed | Major | Fixed | Marshall McMullen | Marshall McMullen | Marshall McMullen | 02/Nov/15 15:24 | 21/Jul/16 16:18 | 06/Dec/15 14:34 | 3.4.7, 3.5.1 | 3.4.8, 3.5.2, 3.6.0 | c client | 0 | 4 | We've started seeing an assert failing inside setup_random at line 537: {code} 528 static void setup_random() 529 { 530 #ifndef _WIN32 // TODO: better seed 531 int seed; 532 int fd = open("/dev/urandom", O_RDONLY); 533 if (fd == -1) { 534 seed = getpid(); 535 } else { 536 int rc = read(fd, &seed, sizeof(seed)); 537 assert(rc == sizeof(seed)); 538 close(fd); 539 } 540 srandom(seed); 541 srand48(seed); 542 #endif {code} The core files show: Program terminated with signal 6, Aborted. #0 0x00007f9ff665a0d5 in raise () from /lib/x86_64-linux-gnu/libc.so.6 #0 0x00007f9ff665a0d5 in raise () from /lib/x86_64-linux-gnu/libc.so.6 #1 0x00007f9ff665d83b in abort () from /lib/x86_64-linux-gnu/libc.so.6 #2 0x00007f9ff6652d9e in ?? () from /lib/x86_64-linux-gnu/libc.so.6 #3 0x00007f9ff6652e42 in __assert_fail () from /lib/x86_64-linux-gnu/libc.so.6 #4 0x00007f9ff8e4070a in setup_random () at src/zookeeper.c:476 #5 0x00007f9ff8e40d76 in resolve_hosts (zh=0x7f9fe14de400, hosts_in=0x7f9fd700f400 "10.26.200.6:2181,10.26.200.7:2181,10.26.200.8:2181", avec=0x7f9fd87fab60) at src/zookeeper.c:730 #6 0x00007f9ff8e40e87 in update_addrs (zh=0x7f9fe14de400) at src/zookeeper.c:801 #7 0x00007f9ff8e44176 in zookeeper_interest (zh=0x7f9fe14de400, fd=0x7f9fd87fac4c, interest=0x7f9fd87fac50, tv=0x7f9fd87fac80) at src/zookeeper.c:1980 #8 0x00007f9ff8e553f5 in do_io (v=0x7f9fe14de400) at src/mt_adaptor.c:379 #9 0x00007f9ff804de9a in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0 #10 0x00007f9ff671738d in clone () from /lib/x86_64-linux-gnu/libc.so.6 #11 0x0000000000000000 in ?? () I'm not sure what the underlying cause of this is... But POSIX always allows for a short read(2), and any program MUST check for short reads... Has anyone else encountered this issue? 
We are seeing it rather frequently which is concerning. |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 4 years, 15 weeks, 4 days ago | 0|i2nua7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
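The assert in ZOOKEEPER-2311 fires because, as the reporter notes, POSIX always allows read(2) to return fewer bytes than requested. A minimal sketch of the usual fix is shown below; the `read_full` and `robust_seed` helper names are invented here for illustration and are not part of the ZooKeeper C client, which handles this in `setup_random` itself.

```c
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Read exactly `count` bytes, retrying on short reads and EINTR.
 * Returns the number of bytes read (less than `count` only on EOF),
 * or -1 on a real error. */
static ssize_t read_full(int fd, void *buf, size_t count)
{
    size_t done = 0;
    while (done < count) {
        ssize_t rc = read(fd, (char *)buf + done, count - done);
        if (rc == 0)
            break;              /* EOF */
        if (rc < 0) {
            if (errno == EINTR)
                continue;       /* interrupted: retry */
            return -1;          /* real error */
        }
        done += (size_t)rc;
    }
    return (ssize_t)done;
}

/* Sketch of a setup_random()-style seeder that falls back to getpid()
 * instead of asserting when /dev/urandom fails or yields a short read. */
static int robust_seed(void)
{
    int seed;
    int fd = open("/dev/urandom", O_RDONLY);
    if (fd == -1 || read_full(fd, &seed, sizeof(seed)) != (ssize_t)sizeof(seed))
        seed = (int)getpid();
    if (fd != -1)
        close(fd);
    return seed;
}
```

Falling back to the PID mirrors what the existing code already does when the open() itself fails, so no new failure mode is introduced.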
| ZooKeeper | ZOOKEEPER-2310 | Snapshot files must be synced to prevent inconsistency or data loss |
Bug | Patch Available | Major | Unresolved | Abhishek Rai | Abhishek Rai | Abhishek Rai | 01/Nov/15 01:26 | 11/Sep/16 04:19 | 3.4.6 | server | 0 | 6 | Today, Zookeeper server syncs transaction log files to disk by default, but does not sync snapshot files. Consequently, an untimely crash may result in a lost or incomplete snapshot file. During recovery, if the server finds a valid older snapshot file, it will load it and replay subsequent log(s), skipping the incomplete snapshot file. It's possible that the skipped file had some transactions which are not present in the replayed transaction logs. Since quorum synchronization is based on last transaction ID of each server, this will never get noticed, resulting in inconsistency between servers and possible data loss. Following sequence of events describes a sample scenario where this can happen: # Server F is a follower in a Zookeeper ensemble. # F's most recent valid snapshot file is named "snapshot.10" containing state up to zxid = 10. F is currently writing to the transaction log file "log.11", with the most recent zxid = 20. # Fresh round of election. # F receives a few new transactions 21 to 30 from new leader L as the "diff". Current server behavior is to dump current state plus diff to a new snapshot file, "snapshot.30". # F finalizes the snapshot file, but file contents are still buffered in OS caches. Zookeeper does not sync snapshot file contents to disk. # F receives a new transaction 31 from the leader, which it appends to the existing transaction log file, "log.11" and syncs the file to disk. # Server machine crashes or is cold rebooted. # After recovery, snapshot file "snapshot.30" may not exist or may be empty. See below for why that may happen. # In either case, F looks for the last finalized snapshot file, finds and loads "snapshot.10". It then replays transactions from "log.11". 
Ultimately, its last seen zxid will be 31, but it would not have replayed transactions 21 to 30 received via the "diff" from the leader. # Clients which are connected to F may see different data than clients connected to other members of the ensemble, violating single system image invariant. Also, if F were to become a leader at some point, it could use its state to seed other servers, and they all could lose the writes in the missing interval above. *Notes:* - Reason why snapshot file may be missing or incomplete: -- Zookeeper does not sync the data directory after creating a snapshot file. Even if a newly created file is synced to disk, if the corresponding directory entry is not, then the file will not be visible in the namespace. -- Zookeeper does not sync snapshot files. So, they may be empty or incomplete during recovery from an untimely crash. - In step (6) above, the server could also have written the new transaction 31 to a new log file, "log.31". The final outcome would still be the same. We are able to deterministically reproduce this problem using the following steps: # Create a new Zookeeper ensemble on 3 hosts: A, B, and C. # Ensured each server has at least one snapshot file in its data dir. # Stop Zookeeper process on server A. # Slow down disk syncs on server A (see example script below). This ensures that snapshot files written by Zookeeper don't make it to disk spontaneously. Log files will be written to disk as Zookeeper explicitly issues a sync call on such files. # Connect to server B and create a new znode /test1. # Start Zookeeper process on A, wait for it to write a new snapshot to its datadir. This snapshot would contain /test1 but it won’t be synced to disk yet. # Connect to A and verify that /test1 is visible. # Connect to B and create another znode /test2. This will cause A’s transaction log to grow further to receive /test2. # Cold reboot A. 
# A’s last snapshot is a zero-sized file or is missing altogether since it did not get synced to disk before reboot. We have seen both in different runs. # Connect to A and verify that /test1 does not exist. It exists on B and C. Slowing down disk syncs: {noformat} echo 360000 | sudo tee /proc/sys/vm/dirty_writeback_centisecs echo 360000 | sudo tee /proc/sys/vm/dirty_expire_centisecs echo 99 | sudo tee /proc/sys/vm/dirty_background_ratio echo 99 | sudo tee /proc/sys/vm/dirty_ratio {noformat} |
9223372036854775807 | No Perforce job exists for this issue. | 3 | 9223372036854775807 | 3 years, 27 weeks, 4 days ago | 0|i2nsfb: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
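The two failure modes described in ZOOKEEPER-2310 (a snapshot file that is missing, and one that is empty or incomplete) correspond to skipping the directory sync and the file sync respectively. The C sketch below is a hypothetical illustration of the durable-write technique; the function name is invented here, and ZooKeeper's actual fix would live in its Java snapshot code.

```c
#include <assert.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical sketch: durably write a snapshot by fsync()ing both the
 * file's contents and its parent directory. Without the directory sync,
 * a crash can leave the file missing from the namespace; without the
 * file sync, it can be empty or truncated. Returns 0 on success. */
static int write_snapshot_durably(const char *dir, const char *name,
                                  const void *data, size_t len)
{
    char path[4096];
    snprintf(path, sizeof(path), "%s/%s", dir, name);

    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd == -1)
        return -1;
    if (write(fd, data, len) != (ssize_t)len || fsync(fd) == -1) {
        close(fd);
        return -1;
    }
    close(fd);

    /* Sync the directory so the new directory entry itself is durable. */
    int dfd = open(dir, O_RDONLY);
    if (dfd == -1)
        return -1;
    int rc = fsync(dfd);
    close(dfd);
    return rc;
}
```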
| ZooKeeper | ZOOKEEPER-2309 | TestClient fails |
Bug | Resolved | Blocker | Not A Problem | Flavio Paiva Junqueira | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 30/Oct/15 11:07 | 30/Oct/15 12:09 | 30/Oct/15 12:09 | 0 | 1 | I'm getting this out of a fresh copy of branch-3.4. {noformat} tests/TestClient.cc:375: Assertion: equality assertion failed [Expected: -101, Actual : -4] tests/TestClient.cc:300: Assertion: assertion failed [Expression: ctx.waitForConnected(zk)] Failures !!! {noformat} |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 20 weeks, 6 days ago | 0|i2nqtj: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2308 | Warning messages in the zookeeper logs |
Bug | Open | Major | Unresolved | Unassigned | Ramesh Gopal | Ramesh Gopal | 29/Oct/15 12:57 | 29/Oct/15 12:57 | 1 | 3 | telnet localhost 2181 gives the following warning messages. 2015-07-08 09:26:13,785 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@347] - caught end of stream exception EndOfStreamException: Unable to read additional data from client sessionid 0x34e6a8473084e8d, likely client has closed socket at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:218) at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208) at java.lang.Thread.run(Thread.java:853) 2015-07-08 09:26:13,785 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@347] - caught end of stream exception EndOfStreamException: Unable to read additional data from client sessionid 0x34e6a8473084e7f, likely client has closed socket at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:218) at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208) at java.lang.Thread.run(Thread.java:853) 2015-07-08 09:26:13,813 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@347] - caught end of stream exceptionEndOfStreamException: Unable to read additional data from client sessionid 0x34e6a8473084e8b, likely client has closed socket at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:218) at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208) at java.lang.Thread.run(Thread.java:853) 2015-07-08 09:26:13,963 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@347] - caught end of stream exceptionEndOfStreamException: Unable to read additional data from client sessionid 0x34e6a8473084e80, likely client has closed socket at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:218) at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208) at java.lang.Thread.run(Thread.java:853) 
2015-07-08 09:26:13,980 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@347] - caught end of stream exceptionEndOfStreamException: Unable to read additional data from client sessionid 0x34e6a8473084e81, likely client has closed socket at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:218) at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208) at java.lang.Thread.run(Thread.java:853) 2015-07-08 09:26:13,982 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@347] - caught end of stream exceptionEndOfStreamException: Unable to read additional data from client sessionid 0x34e6a8473084e7c, likely client has closed socket at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:218) at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208) at java.lang.Thread.run(Thread.java:853) 2015-07-08 09:26:18,453 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@347] - caught end of stream exceptionEndOfStreamException: Unable to read additional data from client sessionid 0x34e6a8473084e84, likely client has closed socket at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:218) at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208) at java.lang.Thread.run(Thread.java:853) |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 21 weeks ago | 0|i2np7b: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2307 | ZooKeeper not starting because acceptedEpoch is less than the currentEpoch |
Bug | Resolved | Major | Fixed | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 29/Oct/15 11:12 | 03/Feb/20 05:35 | 17/Dec/19 07:48 | 3.6.0 | server | 1 | 12 | 0 | 7800 | ZOOKEEPER-2660, ZOOKEEPER-2162 | This issue occurred in one of our test environments where the disk was changing to read-only very frequently. The scenario is as follows: # Configure a three-node ZooKeeper cluster; let's say the nodes are A, B and C # Start A and B. Both A and B start successfully, and the quorum is running. # Start C. Because of an IO error, C fails to update the acceptedEpoch file, but C still starts successfully and joins the quorum as a follower # Stop C # Start C. The below exception with the message "The accepted epoch, 0 is less than the current epoch, 1" is thrown {code} 2015-10-29 16:52:32,942 [myid:3] - ERROR [main:QuorumPeer@784] - Unable to load database on disk java.io.IOException: The accepted epoch, 0 is less than the current epoch, 1 at org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:781) at org.apache.zookeeper.server.quorum.QuorumPeer.start(QuorumPeer.java:720) at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:202) at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:139) at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:88) 2015-10-29 16:52:32,946 [myid:3] - ERROR [main:QuorumPeerMain@111] - Unexpected exception, exiting abnormally java.lang.RuntimeException: Unable to run quorum server at org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:785) at org.apache.zookeeper.server.quorum.QuorumPeer.start(QuorumPeer.java:720) at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:202) at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:139) at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:88) Caused by: java.io.IOException: The accepted epoch, 0 
is less than the current epoch, 1 at org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:781) {code} |
100% | 100% | 7800 | 0 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 4 | 9223372036854775807 | 13 weeks, 2 days ago | 0|i2nozz: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
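A standard way to keep an epoch file like acceptedEpoch from being left behind by an I/O error, as in ZOOKEEPER-2307, is to update it atomically: write a temporary file, fsync it, then rename() it over the old file, since rename is atomic on POSIX filesystems. The C sketch below is a hypothetical illustration of that technique (the function name is invented), not ZooKeeper's actual Java implementation.

```c
#include <assert.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical sketch: atomically replace an epoch file so that a crash
 * or write error can never leave a partially written or stale value.
 * Returns 0 on success, -1 on failure (old file is left untouched). */
static int write_epoch_atomically(const char *path, long epoch)
{
    char tmp[4096];
    snprintf(tmp, sizeof(tmp), "%s.tmp", path);

    FILE *f = fopen(tmp, "w");
    if (!f)
        return -1;
    if (fprintf(f, "%ld\n", epoch) < 0 || fflush(f) != 0 ||
        fsync(fileno(f)) != 0) {
        fclose(f);
        return -1;          /* the real file was never touched */
    }
    fclose(f);
    return rename(tmp, path);   /* atomic replacement of the old file */
}
```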
| ZooKeeper | ZOOKEEPER-2306 | Remove file delete duplicate code from test code |
Improvement | Closed | Major | Fixed | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 28/Oct/15 03:41 | 21/Jul/16 16:18 | 06/Dec/15 15:32 | 3.5.2, 3.6.0 | tests | 0 | 3 | Code to delete a folder recursively is duplicated across multiple test files. The following files contain the same piece of code: {code} src/java/systest/org/apache/zookeeper/test/system/QuorumPeerInstance.java src/java/test/org/apache/zookeeper/test/ClientBase.java src/java/test/org/apache/zookeeper/server/quorum/LearnerTest.java src/java/test/org/apache/zookeeper/server/quorum/Zab1_0Test.java {code} Remove the duplicate code from these files. |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 15 weeks, 4 days ago | 0|i2nm8v: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2305 | created |
Bug | Open | Major | Unresolved | Unassigned | shengfeng | shengfeng | 27/Oct/15 07:48 | 28/Oct/15 01:45 | 3.4.6 | quorum | 0 | 1 | deleteChild method in Pathtrie class, childNode.getChildren().length == 1, why not childNode.getChildren().length == 0 ? | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 21 weeks, 2 days ago | 0|i2nklr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2304 | JMX ClientPort from ZooKeeperServerBean incorrect |
Bug | Closed | Major | Fixed | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 26/Oct/15 07:47 | 21/Jul/16 16:18 | 06/Dec/15 15:39 | 3.5.2 | jmx | 0 | 4 | The "ClientPort" property of {{org.apache.zookeeper.server.ZooKeeperServerBean}} returns incorrect value. It includes address also like 192.168.1.2:2183. It should return only port | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 15 weeks, 4 days ago | 0|i2nio7: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2303 | Zookeeper fails to compile (mesos driver) on Raspberry Pi2 |
Bug | Resolved | Critical | Not A Problem | Steven Fisher | Steven Fisher | Steven Fisher | 25/Oct/15 11:46 | 27/Oct/15 12:18 | 27/Oct/15 12:17 | 3.4.3, 3.4.6 | build | 0 | 2 | Raspberry Pi - Linux rpi2 3.18.6-v7+ #3 SMP PREEMPT Mon Feb 9 15:39:54 UTC 2015 armv7l GNU/Linux |
Trying to compile mesos on Raspberry Pi 2. Zookeeper builds ok with ant directly (Java only), but when using make for mesos it tries to compile the C code, and this fails with the error: libtool: compile: gcc -DHAVE_CONFIG_H -I. -I./include -I./tests -I./generated -DTHREADED -g -O2 -D_GNU_SOURCE -MT libzkmt_la-mt_adaptor.lo -MD -MP -MF .deps/libzkmt_la-mt_adaptor.Tpo -c src/mt_adaptor.c -fPIC -DPIC -o libzkmt_la-mt_adaptor.o /tmp/ccw07Ju5.s: Assembler messages: /tmp/ccw07Ju5.s:1515: Error: bad instruction `lock xaddl r1,[r0]' Makefile:823: recipe for target 'libzkmt_la-mt_adaptor.lo' failed make[5]: *** [libzkmt_la-mt_adaptor.lo] Error 1 The mesos release comes with 3.4.5, but I have also tried 3.4.6 |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 21 weeks, 2 days ago | 0|i2nhtz: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2302 | Some test cases are not running because wrongly named |
Bug | Closed | Major | Fixed | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 25/Oct/15 06:52 | 21/Jul/16 16:18 | 28/Oct/15 17:58 | 3.5.2, 3.6.0 | tests | 0 | 5 | When running the ZooKeeper test cases, the following two test classes never run because the wrong naming convention is followed: {code} org.apache.zookeeper.server.quorum.TestQuorumPeerConfig org.apache.zookeeper.server.quorum.TestRemotePeerBean {code} The names of these test classes should be changed to: {code} org.apache.zookeeper.server.quorum.QuorumPeerConfigTest org.apache.zookeeper.server.quorum.RemotePeerBeanTest {code} |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 21 weeks, 1 day ago | 0|i2nhof: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2301 | QuorumPeer does not listen on passed client IP in the constructor |
Bug | Closed | Major | Fixed | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 25/Oct/15 05:44 | 21/Jul/16 16:18 | 07/Dec/15 00:01 | 3.5.2 | server | 0 | 4 | ZOOKEEPER-2299 | QuorumPeer does not listen on the client IP passed in the constructor for client connections. It always listens on all IPs (0.0.0.0 or 0:0:0:0:0:0:0:0). This happens only when QuorumPeer is created using either of the below constructors {code} org.apache.zookeeper.server.quorum.QuorumPeer.QuorumPeer(Map<Long,QuorumServer> quorumPeers, File snapDir, File logDir, int clientPort, int electionAlg, long myid, int tickTime, int initLimit, int syncLimit) {code} {code} org.apache.zookeeper.server.quorum.QuorumPeer.QuorumPeer(Map<Long,QuorumServer> quorumPeers, File snapDir, File logDir, int clientPort, int electionAlg, long myid, int tickTime, int initLimit, int syncLimit, QuorumVerifier quorumConfig) {code} |
9223372036854775807 | No Perforce job exists for this issue. | 3 | 9223372036854775807 | 4 years, 15 weeks, 3 days ago | 0|i2nhmv: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2300 | Expose SecureClientPort and SecureClientAddress JMX properties |
Improvement | Closed | Major | Fixed | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 25/Oct/15 03:07 | 21/Jul/16 16:18 | 07/Dec/15 00:28 | 3.5.0 | 3.5.2 | jmx | 0 | 3 | ZooKeeper currently exposes ClientPort and ClientAddress JMX properties. Same way we should expose SecureClientPort and SecureClientAddress. The values for these two properties will be decided based on the configured value of secureClientPort and secureClientPortAddress The value of SecureClientPort will be: secureClientPort {color:blue}(if secureClientPort is configured){color} empty string {color:blue}(If secureClientPort is not configured){color} The value of SecureClientAddress will be: secureClientPortAddress:secureClientPort {color:blue}(if both secureClientPort and secureClientPortAddress are configured){color} 0.0.0.0:secureClientPort or 0:0:0:0:0:0:0:0:secureClientPort {color:blue}(if only secureClientPort is configured){color} empty string {color:blue}(If secureClientPort is not configured){color} |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 15 weeks, 3 days ago | 0|i2nhkn: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2299 | NullPointerException in LocalPeerBean for ClientAddress |
Bug | Closed | Major | Fixed | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 22/Oct/15 05:28 | 21/Jul/16 16:18 | 07/Dec/15 00:08 | 3.5.2, 3.6.0 | jmx | 0 | 6 | ZOOKEEPER-2301 | When clientPortAddress is not configured, LocalPeerBean throws a NullPointerException. *Expected Behavior:* # When only clientPort is configured, the ClientAddress value should be 0.0.0.0:clientPort or 0:0:0:0:0:0:0:0:clientPort # When both clientPort and clientPortAddress are configured, the expected value is clientPortAddress:clientPort |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 15 weeks, 3 days ago | 0|i2nddb: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2298 | zookeeper: Should retry on EAI_NONAME return from getaddrinfo() |
Bug | Resolved | Minor | Invalid | Unassigned | Neil Conway | Neil Conway | 22/Oct/15 00:31 | 22/Oct/15 00:33 | 22/Oct/15 00:33 | 0 | 1 | The zookeeper interface is designed to retry (once per second for up to ten minutes) if one or more of the Zookeeper hostnames can't be resolved (see [MESOS-1326] and [MESOS-1523]). However, the current implementation assumes that a DNS resolution failure is indicated by zookeeper_init() returning NULL and errno being set to EINVAL (Zk translates getaddrinfo() failures into errno values). However, the current Zk code does: {code} static int getaddrinfo_errno(int rc) { switch(rc) { case EAI_NONAME: // ZOOKEEPER-1323 EAI_NODATA and EAI_ADDRFAMILY are deprecated in FreeBSD. #if defined EAI_NODATA && EAI_NODATA != EAI_NONAME case EAI_NODATA: #endif return ENOENT; case EAI_MEMORY: return ENOMEM; default: return EINVAL; } } {code} getaddrinfo() returns EAI_NONAME when "the node or service is not known"; per discussion in [MESOS-2186], this seems to happen intermittently due to DNS failures. Proposed fix: looking at errno is always going to be somewhat fragile, but if we're going to continue doing that, we should check for ENOENT as well as EINVAL. |
mesosphere | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 22 weeks ago | 0|i2nd0f: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
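The fix proposed in ZOOKEEPER-2298 amounts to widening the caller's retry condition: since the client library maps getaddrinfo()'s EAI_NONAME to ENOENT, a caller that retries transient DNS failures after zookeeper_init() returns NULL should accept ENOENT as well as EINVAL. A hypothetical sketch of that check (the helper name is invented, not part of the ZooKeeper C API):

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical helper: given the errno left behind by a failed
 * zookeeper_init(), decide whether the failure looks like a DNS
 * resolution problem worth retrying. EINVAL is the library's generic
 * getaddrinfo() error; ENOENT is its mapping of EAI_NONAME/EAI_NODATA. */
static int is_retryable_dns_error(int err)
{
    return err == EINVAL || err == ENOENT;
}
```

As the reporter notes, keying retries off errno is fragile in general; a richer error-reporting API would be the sturdier long-term fix.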
| ZooKeeper | ZOOKEEPER-2297 | NPE is thrown while creating "key manager" and "trust manager" |
Bug | Closed | Blocker | Fixed | Mohammad Arshad | Anushri | Anushri | 19/Oct/15 07:38 | 21/Jul/16 16:18 | 23/Jun/16 13:43 | 3.5.1 | 3.5.2, 3.6.0 | server | 0 | 10 | Suse 11 sp 3 | NPE is thrown while creating "key manager" and "trust manager" , even though the zk setup is in non-secure mode bq. 2015-10-19 12:54:12,278 [myid:2] - ERROR [ProcessThread(sid:2 cport:-1)::X509AuthenticationProvider@78] - Failed to create key manager bq. org.apache.zookeeper.common.X509Exception$KeyManagerException: java.lang.NullPointerException at org.apache.zookeeper.common.X509Util.createKeyManager(X509Util.java:129) at org.apache.zookeeper.server.auth.X509AuthenticationProvider.<init>(X509AuthenticationProvider.java:75) at org.apache.zookeeper.server.auth.ProviderRegistry.initialize(ProviderRegistry.java:42) at org.apache.zookeeper.server.auth.ProviderRegistry.getProvider(ProviderRegistry.java:68) at org.apache.zookeeper.server.PrepRequestProcessor.fixupACL(PrepRequestProcessor.java:952) at org.apache.zookeeper.server.PrepRequestProcessor.pRequest2Txn(PrepRequestProcessor.java:379) at org.apache.zookeeper.server.PrepRequestProcessor.pRequest(PrepRequestProcessor.java:716) at org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:144) Caused by: java.lang.NullPointerException at org.apache.zookeeper.common.X509Util.createKeyManager(X509Util.java:113) ... 7 more bq. 2015-10-19 12:54:12,279 [myid:2] - ERROR [ProcessThread(sid:2 cport:-1)::X509AuthenticationProvider@90] - Failed to create trust manager bq. 
org.apache.zookeeper.common.X509Exception$TrustManagerException: java.lang.NullPointerException at org.apache.zookeeper.common.X509Util.createTrustManager(X509Util.java:158) at org.apache.zookeeper.server.auth.X509AuthenticationProvider.<init>(X509AuthenticationProvider.java:87) at org.apache.zookeeper.server.auth.ProviderRegistry.initialize(ProviderRegistry.java:42) at org.apache.zookeeper.server.auth.ProviderRegistry.getProvider(ProviderRegistry.java:68) at org.apache.zookeeper.server.PrepRequestProcessor.fixupACL(PrepRequestProcessor.java:952) at org.apache.zookeeper.server.PrepRequestProcessor.pRequest2Txn(PrepRequestProcessor.java:379) at org.apache.zookeeper.server.PrepRequestProcessor.pRequest(PrepRequestProcessor.java:716) at org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:144) Caused by: java.lang.NullPointerException at org.apache.zookeeper.common.X509Util.createTrustManager(X509Util.java:143) ... 7 more |
9223372036854775807 | No Perforce job exists for this issue. | 8 | 9223372036854775807 | 3 years, 39 weeks ago | 0|i2n6nz: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2296 | compilation broken for 3.4 |
Bug | Resolved | Blocker | Fixed | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | 18/Oct/15 17:08 | 21/Oct/15 18:12 | 21/Oct/15 18:12 | 3.4.7 | 0 | 3 | Apparently, ZOOKEEPER-2253 wasn't fully backported from trunk so it doesn't compile now. We should make sure jenkins runs for 3.4, to catch these issues in the future. |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 22 weeks, 1 day ago |
Reviewed
|
0|i2n61b: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2295 | TGT refresh time logic is wrong |
Bug | Closed | Major | Fixed | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 17/Oct/15 02:32 | 21/Jul/16 16:18 | 08/Dec/15 23:11 | 3.4.7, 3.5.1 | 3.4.8, 3.5.2, 3.6.0 | 0 | 4 | When Kerberos is used as the authentication mechanism, the TGT sometimes expires because it is not refreshed in time. The scenario is as follows: suppose now=8 (the current milliseconds), next refresh time=10, TGT expire time=9. *Current behaviour:* an error is logged and the TGT refresh thread exits. *Expected behaviour:* the TGT should be refreshed immediately (now) instead of at nextRefreshTime |
9223372036854775807 | No Perforce job exists for this issue. | 3 | 9223372036854775807 | 4 years, 15 weeks, 1 day ago | 0|i2n5db: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
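The expected behaviour described in ZOOKEEPER-2295 can be sketched as a small decision rule. This is a hypothetical helper, not the actual Login-thread code: when the scheduled refresh would land after the ticket's expiry, refresh immediately instead of logging an error and exiting.

```java
// Hypothetical sketch of the refresh rule proposed in ZOOKEEPER-2295;
// the real logic lives in ZooKeeper's Login refresh thread.
public class TgtRefreshRule {
    // Returns the instant at which the TGT should actually be refreshed.
    static long effectiveRefreshTime(long now, long nextRefreshTime, long expiryTime) {
        if (nextRefreshTime > expiryTime) {
            // The scheduled refresh would miss the ticket's lifetime:
            // refresh right away rather than letting the thread exit.
            return now;
        }
        return nextRefreshTime;
    }

    public static void main(String[] args) {
        // Scenario from the report: now=8, next refresh=10, expiry=9.
        System.out.println(effectiveRefreshTime(8, 10, 9)); // prints 8
    }
}
```

With the reported values (next refresh at 10, expiry at 9), the rule refreshes at 8 (now) instead of failing.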
| ZooKeeper | ZOOKEEPER-2294 | Ant target generate-clover-reports is broken |
Bug | Closed | Major | Fixed | Charlie Helin | Charlie Helin | Charlie Helin | 14/Oct/15 15:50 | 21/Jul/16 16:18 | 03/Mar/16 12:27 | 3.4.9, 3.5.2, 3.6.0 | build | 0 | 4 | ZOOKEEPER-2266 | It appears that the current implementation of 'generate-clover-reports' is broken. # It doesn't define the clover-report task # The clover-report element is missing the proper clover db reference |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 3 weeks ago |
Reviewed
|
0|i2n0vj: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2293 | Incorrect log level 'warn' is cluttering logs; suggest demoting it from WARN to DEBUG |
Bug | Open | Minor | Unresolved | Rabi Kumar K C | Praveena Manvi | Praveena Manvi | 14/Oct/15 03:56 | 01/Feb/20 12:19 | 3.4.6 | server | 0 | 4 | 0 | 1800 | In https://svn.apache.org/repos/asf/zookeeper/trunk/src/java/main/org/apache/zookeeper/server/ZooKeeperServer.java, if the readOnly flag is not sent by the client, this gets logged as a warning. Since warnings are enabled, the server log fills up with messages like the ones below. (Btw, readOnly is optional and was introduced later: http://wiki.apache.org/hadoop/ZooKeeper/GSoCReadOnlyMode) {code} 015-08-14T11:03:11+00:00 Connection request from old client /192.168.24.16:14479; will be dropped if server is in r-o mode ... 2015-08-14T11:21:56+00:00 Connection request from old client 2015-08-14T11:18:40+00:00 Connection request from old client /192.168.24.14:12135; will be dropped if server is in r-o mode 2015-08-14T11:19:40+00:00 Connection request from old client /192.168.24.14:12310; will be dropped if server is in r-o mode {code} We are forced to send the read-only flag, which is optional, just to work around the wrong logging level chosen by ZooKeeper. {code} boolean readOnly = false; try { readOnly = bia.readBool("readOnly"); cnxn.isOldClient = false; } catch (IOException e) { // this is ok -- just a packet from an old client which // doesn't contain readOnly field LOG.warn("Connection request from old client " + cnxn.getRemoteSocketAddress() + "; will be dropped if server is in r-o mode"); } {code} Suggest demoting this to DEBUG, as it is not a condition worth warning about. |
100% | 100% | 1800 | 0 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 6 weeks, 5 days ago | 0|i2mzn3: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
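The demotion ZOOKEEPER-2293 asks for amounts to logging the old-client message at a debug level so that default configurations suppress it. A minimal stand-in using java.util.logging (ZooKeeper itself logs via SLF4J; this only illustrates the level semantics):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class OldClientLogging {
    private static final Logger LOG = Logger.getLogger(OldClientLogging.class.getName());

    public static void main(String[] args) {
        LOG.setLevel(Level.INFO); // a typical production threshold

        // At WARNING the message is emitted for every legacy client,
        // flooding the log; demoted to FINE (java.util.logging's DEBUG
        // analogue) it is suppressed under the default threshold.
        if (LOG.isLoggable(Level.FINE)) {
            LOG.fine("Connection request from old client; will be dropped if server is in r-o mode");
        }
        System.out.println(LOG.isLoggable(Level.WARNING)); // prints true
        System.out.println(LOG.isLoggable(Level.FINE));    // prints false
    }
}
```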
| ZooKeeper | ZOOKEEPER-2292 | Sign the download package |
Improvement | Resolved | Major | Duplicate | Chris Nauroth | Elias Levy | Elias Levy | 13/Oct/15 19:12 | 14/Oct/15 11:47 | 14/Oct/15 11:47 | build | 0 | 2 | ZOOKEEPER-2177 | Current ZK is made available for download as a compressed archive. Within the archive, there is a cryptographic signature for the ZK JAR file. Alas, the signature does not cover any of the other executable components that ZK depends on, such as JARs in the lib directory or the scripts in the bin directory. These could be tampered with. The whole download package should be signed and the signature made available along with it. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 23 weeks, 1 day ago | 0|i2mz5j: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2291 | JMX log should not be printed while stopping ZK Server |
Bug | Open | Minor | Unresolved | Neha Bathra | Neha Bathra | Neha Bathra | 13/Oct/15 11:41 | 14/Oct/15 02:37 | 0 | 2 | While stopping, ZooKeeper prints the message below: "ZooKeeper JMX enabled by default ZooKeeper remote JMX Port set to 2022 ZooKeeper remote JMX authenticate set to false ZooKeeper remote JMX ssl set to false ZooKeeper remote JMX log4j set to true Stopping zookeeper ...STOPPED" The JMX messages should only be printed while starting ZK. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 23 weeks, 2 days ago | 0|i2myfb: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2290 | Add read/write qps metrics in monitor cmd |
Improvement | Patch Available | Minor | Unresolved | Shaohui Liu | Shaohui Liu | Shaohui Liu | 11/Oct/15 23:40 | 14/Dec/19 06:07 | 3.4.6 | 3.7.0 | 0 | 4 | Read/write qps are important metrics to show the pressure of the cluster. We can also use it to alert about some abuse of zookeeper. | monitor | 9223372036854775807 | No Perforce job exists for this issue. | 5 | 9223372036854775807 | 4 years, 6 days ago | 0|i2mvcv: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2289 | Support use of SASL "auth-int" and "auth-conf" quality of protection. |
New Feature | Open | Major | Unresolved | Unassigned | Chris Nauroth | Chris Nauroth | 10/Oct/15 18:50 | 16/Oct/15 18:46 | java client, server | 1 | 4 | The current codebase supports use of SASL for authenticating connections, but it does not allow specifying the desired SASL Quality of Protection (QOP). It always uses the default QOP, which is "auth" (authentication only). This issue proposes to support the full set of available QOP settings by adding support for "auth-int" (authentication and integrity) and "auth-conf" (authentication and integrity and privacy/encryption). | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 23 weeks, 4 days ago | 0|i2muvr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2288 | During shutdown, server may fail to ack completed transactions to clients. |
Bug | Patch Available | Major | Unresolved | Chris Nauroth | Chris Nauroth | Chris Nauroth | 10/Oct/15 18:16 | 10/Oct/15 18:55 | server | 0 | 2 | CURATOR-268 | During shutdown, requests may still be in flight in the request processing pipeline. Some of these requests have reached a state where the transaction has executed and committed, but has not yet been acknowledged back to the client. It's possible that these transactions will not ack to the client before the shutdown sequence completes. | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 23 weeks, 5 days ago | 0|i2muvb: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2287 | Audit logging the zookeeper operations |
New Feature | Resolved | Major | Duplicate | Mohammad Arshad | nijel | nijel | 30/Sep/15 07:17 | 29/Jun/16 15:01 | 29/Jun/16 15:01 | 0 | 3 | ZOOKEEPER-1260 | As of now, ZooKeeper does not support auditing user operations. This traceability is very important in a distributed cluster. We can have a separate logger and log file, and can start with normal node change operations. Please share your thoughts. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 25 weeks, 1 day ago | 0|i2mg7r: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2286 | Doc Issue |
Improvement | Open | Minor | Unresolved | Unassigned | richard.zhao | richard.zhao | 29/Sep/15 03:22 | 29/Sep/15 03:22 | 3.4.6 | documentation | 0 | 1 | 60 | 60 | 0% | First, thanks for your detailed documents. They're very helpful. In "Programming with ZooKeeper - A basic tutorial" (URL: https://zookeeper.apache.org/doc/trunk/zookeeperTutorial.html), the demo code has a small issue. --------------------------------------------------------------------------------- boolean enter() throws KeeperException, InterruptedException{ zk.create(root + "/" + name, new byte[0], Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL); while (true) { synchronized (mutex) { List<String> list = zk.getChildren(root, true); if (list.size() < size) { mutex.wait(); } else { return true; } } } } --------------------------------------------------------------------------------- The invocation of zk.create() should be under the if() branch, as below --------------------------------------------------------------------------------- if (list.size() < size) { zk.create(root + "/" + name, new byte[0], Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL); mutex.wait(); } else { --------------------------------------------------------------------------------- The function leave() has a similar problem, and the invocation of zk.delete() should be as follows --------------------------------------------------------------------------------- zk.delete(root + "/" + list.get(0), 0); --------------------------------------------------------------------------------- Hope it can help other doc readers. |
0% | 0% | 60 | 60 | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 25 weeks, 2 days ago | 0|i2me7r: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2285 | QuorumTest's ignored test case causes wrong CI pre-commit feedback |
Bug | Patch Available | Major | Unresolved | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 28/Sep/15 17:25 | 05/Feb/20 07:11 | 3.5.0 | 3.7.0, 3.5.8 | tests | 0 | 4 | # test case {{org.apache.zookeeper.test.QuorumTest.testSessionMove()}} is marked ignored by ZOOKEEPER-907 # Most of the CI pre-commit feedback is -1 because of above ignored test case. # Test case is locally passing The ignore tag should be removed from testSessionMove test case. |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 33 weeks, 6 days ago | 0|i2mdnb: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2284 | LogFormatter and SnapshotFormatter do not handle FileNotFoundException gracefully |
Bug | Closed | Minor | Fixed | maoling | Mohammad Arshad | Mohammad Arshad | 28/Sep/15 14:27 | 20/May/19 13:51 | 22/Feb/19 04:45 | 3.5.0 | 3.6.0, 3.5.5 | 0 | 6 | 0 | 12600 | {{LogFormatter}} and {{SnapshotFormatter}} do not handle FileNotFoundException gracefully. If the file does not exist, these classes propagate the exception to the console. {code} Exception in thread "main" java.io.FileNotFoundException: log.1 (The system cannot find the file specified) at java.io.FileInputStream.open(Native Method) at java.io.FileInputStream.<init>(FileInputStream.java:146) at java.io.FileInputStream.<init>(FileInputStream.java:101) at org.apache.zookeeper.server.LogFormatter.main(LogFormatter.java:49) {code} File existence should be validated and an appropriate message should be displayed on the console if the file does not exist |
100% | 100% | 12600 | 0 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 4 | 9223372036854775807 | 1 year, 9 weeks, 3 days ago | 0|i2mdcv: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
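The graceful handling ZOOKEEPER-2284 requests can be sketched as an upfront existence check. This is a hypothetical helper, not the actual formatter code in org.apache.zookeeper.server:

```java
import java.io.File;

public class FormatterFileCheck {
    // Validates the input file before any parsing, returning a console
    // message instead of letting FileNotFoundException escape to the user.
    static String validate(String path) {
        if (!new File(path).exists()) {
            return "Error: file '" + path + "' does not exist";
        }
        return "OK";
    }

    public static void main(String[] args) {
        // With no "log.1" in the working directory this prints the error line.
        System.out.println(validate("log.1"));
    }
}
```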
| ZooKeeper | ZOOKEEPER-2283 | traceFile property is not used in the ZooKeeper, it should be removed from documentation |
Bug | Closed | Major | Fixed | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 28/Sep/15 13:47 | 09/May/19 14:59 | 14/Mar/16 03:00 | 3.4.8, 3.5.0 | 3.4.9, 3.5.2, 3.6.0 | documentation | 0 | 4 | ZOOKEEPER-3382 | The zookeeperAdmin guide has the following description for the traceFile property {noformat} traceFile (Java system property: requestTraceFile) If this option is defined, requests will be will logged to a trace file named traceFile.year.month.day. Use of this option provides useful debugging information, but will impact performance. (Note: The system property has no zookeeper prefix, and the configuration variable name is different from the system property. Yes - it's not consistent, and it's annoying.) {noformat} But this property is used nowhere in the whole ZooKeeper code. It should be removed from the documentation |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 1 week, 3 days ago |
Reviewed
|
0|i2md8v: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2282 | chroot not stripped from path in asynchronous callbacks |
Bug | Closed | Critical | Fixed | Andrew Grasso | Andrew Grasso | Andrew Grasso | 28/Sep/15 13:19 | 14/Feb/20 10:23 | 27/Sep/19 09:43 | 3.4.6, 3.5.0 | 3.6.0, 3.5.7 | c client | 1 | 5 | 3600 | 0 | 4200 | 116% | ZOOKEEPER-1027 | Centos 6.6 | Callbacks passed to [zoo_acreate], [zoo_async], and [zoo_amulti] (for create ops) are called on paths that include the chroot. This is analagous to issue 1027, which fixed this bug for synchronous calls. I've created a patch to fix this in trunk |
100% | 100% | 4200 | 0 | 3600 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 24 weeks, 6 days ago | 0|i2md6n: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2281 | ZK Server startup fails if there are spaces in the JAVA_HOME path |
Bug | Closed | Minor | Fixed | Neha Bathra | Neha Bathra | Neha Bathra | 23/Sep/15 10:43 | 21/Jul/16 16:18 | 09/Dec/15 18:27 | 3.4.8, 3.5.2, 3.6.0 | scripts | 1 | 8 | ZOOKEEPER-2351, ZOOKEEPER-2341 | Windows | Zookeeper startup fails if there are spaces in the %JAVA_HOME% variable. {code} if not exist %JAVA_HOME%\bin\java.exe ( echo Error: JAVA_HOME is incorrectly set. goto :eof ) set JAVA=%JAVA_HOME%\bin\java {code} |
9223372036854775807 | No Perforce job exists for this issue. | 3 | 9223372036854775807 | 4 years, 15 weeks, 1 day ago |
Reviewed
|
0|i2lgmn: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2280 | NettyServerCnxnFactory doesn't honor maxClientCnxns param |
Bug | Open | Major | Unresolved | Unassigned | Edward Ribeiro | Edward Ribeiro | 21/Sep/15 21:13 | 05/Feb/20 07:16 | 3.4.6, 3.5.0, 3.5.1 | 3.7.0, 3.5.8 | server | 0 | 7 | 0 | 1800 | ZOOKEEPER-2739, ZOOKEEPER-2454, ZOOKEEPER-2238 | Even though NettyServerCnxnFactory has maxClientCnxns (default to 60) it doesn't enforce this limit in the code. | 100% | 100% | 1800 | 0 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 2 years, 51 weeks, 3 days ago | 0|i2ldjj: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2279 | QuorumPeer loadDataBase() error message is incorrect |
Bug | Closed | Major | Fixed | Mohammad Arshad | sunhaitao | sunhaitao | 21/Sep/15 11:10 | 21/Jul/16 16:18 | 25/Sep/15 02:35 | 3.5.0, 3.5.1 | 3.4.7, 3.5.2, 3.6.0 | quorum | 0 | 5 | In the loadDataBase() method, the message below is incorrect. if (acceptedEpoch < currentEpoch) { throw new IOException("The current epoch, " + ZxidUtils.zxidToString(currentEpoch) + " is less than the accepted epoch, " + ZxidUtils.zxidToString(acceptedEpoch)); } It should instead read: "The accepted epoch, " + ZxidUtils.zxidToString(acceptedEpoch) + " is less than the current epoch, " + ZxidUtils.zxidToString(currentEpoch) |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 25 weeks, 6 days ago | 0|i2lclb: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
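The corrected message in ZOOKEEPER-2279 simply swaps which epoch is named first, since in this branch the accepted epoch is the smaller value. A minimal sketch, with Long.toHexString standing in for ZxidUtils.zxidToString so the example is self-contained:

```java
public class EpochMessage {
    // Builds the corrected error text: the *accepted* epoch is the
    // smaller value here, so it should be named as the lesser one.
    static String message(long currentEpoch, long acceptedEpoch) {
        return "The accepted epoch, 0x" + Long.toHexString(acceptedEpoch)
                + " is less than the current epoch, 0x" + Long.toHexString(currentEpoch);
    }

    public static void main(String[] args) {
        long currentEpoch = 3, acceptedEpoch = 2;
        if (acceptedEpoch < currentEpoch) {
            // prints: The accepted epoch, 0x2 is less than the current epoch, 0x3
            System.out.println(message(currentEpoch, acceptedEpoch));
        }
    }
}
```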
| ZooKeeper | ZOOKEEPER-2278 | 4lw command stmk throws NullPointerException |
Bug | Resolved | Major | Duplicate | Unassigned | Neha Bathra | Neha Bathra | 18/Sep/15 08:50 | 18/Sep/15 13:59 | 18/Sep/15 12:51 | 0 | 4 | ZOOKEEPER-2227 | run echo stmk 123 | netcat <hostname> <port> fails with NullPointerException | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 26 weeks, 6 days ago | 0|i2ke4n: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2277 | Zookeeper off-line snapshot and transaction log viewer |
Improvement | Patch Available | Major | Unresolved | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 16/Sep/15 12:29 | 05/Feb/20 07:12 | 3.7.0, 3.5.8 | 0 | 6 | {color:blue}Currently ZooKeeper provides utility java program to view the snapshot and transaction off-line but these are not easy to use, also the content is less understandable {color} In this improvement task I propose following functionality: 1) add zkTool.sh script to view snapshot and transaction. Usage: zkTool.sh -COMMAND <transaction/snapshot file> where COMMAND is one of: otv off-line transaction viewer, prints ZooKeeper transaction log in text format osv off-line snapshot viewer, prints ZooKeeper snapshot in text format 2) otv command will give output as: {noformat} 9/4/15 4:37:04 PM IST session 0x1004d19fe6f0002 cxid 0x00000000000c2c (epoch=0,count=3116) zxid 0x00000100000c49 (epoch=1,count=3145) create path="/4da53875-b471-4ab1-9995-03889e73c0a3/node246",data="Quick brown fox jumps over the lazy dog ",acl{e1{perms="cdrwa",id{scheme="world",id="anyone"}}},ephemeral="true",parentCVersion="8" {noformat} It is mostly same as {{org.apache.zookeeper.server.LogFormatter}} with some differences. * epoch and count are separated from zxid. * operations type will be written instead of code like createSession instead -10. * showing data. * permissions are written in letters perms="cdrwa" instead of perms="31" same as {{org.apache.zookeeper.cli.GetAclCommand}}. * ephemeral="true" instead of ephemeral="1" * etc. 
3) osv command will give output as: {code} /67868d36-8bbf-4a8a-a076-f16810ac10de/node540000000010 cZxid = 0x00000100000265 (epoch=1,count=613) ctime = Fri Sep 04 16:35:58 IST 2015 mZxid = 0x00000100000265 (epoch=1,count=613) mtime = Fri Sep 04 16:35:58 IST 2015 pZxid = 0x00000100000265 (epoch=1,count=613) cversion = 0 dataVersion = 0 aclVersion = 0 ephemeralOwner = 0x1004d19fe6f0002 dataLength = 40 data = Quick brown fox jumps over the lazy dog {code} which is almost same as {{org.apache.zookeeper.server.SnapshotFormatter}} |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 1 year, 45 weeks ago | 0|i2k9br: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2276 | Multi operation failure does not include path in KeeperException |
Bug | Patch Available | Minor | Unresolved | Mohammad Arshad | Neha Bathra | Neha Bathra | 15/Sep/15 01:54 | 05/Feb/20 07:11 | 3.5.0 | 3.7.0, 3.5.8 | java client | 0 | 4 | Suse 11 SP3 | With a normal create operation, the path of the failed node is displayed in the KeeperException, but this is not the case when the create operation goes through the multi API |
9223372036854775807 | No Perforce job exists for this issue. | 4 | 9223372036854775807 | 3 years, 25 weeks, 2 days ago | 0|i2k6jr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2275 | Fix RPM package creation on recent distros |
Bug | Resolved | Major | Won't Fix | Hannu Valtonen | Hannu Valtonen | Hannu Valtonen | 14/Sep/15 16:21 | 03/Mar/16 11:24 | 03/Mar/16 11:24 | 3.5.1 | build | 0 | 2 | ZOOKEEPER-2124, ZOOKEEPER-1604 | Three issues with RPM package building. The install stage was removing BUILDROOT content: [rpm] + rm -rf /tmp/zkpython_build_rpm/BUILD Since BUILD and BUILDROOT are actually the same folder, everything is removed before being used. The original fix for this problem was submitted by Cédric Lejeune http://mail-archives.apache.org/mod_mbox/zookeeper-user/201212.mbox/%3C50D2D481.8010507@pt-consulting.eu%3E The other two issues that need to be fixed are an invalid argument given to popd and a reference to old redhat RPM packaging scripts. |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 |
Patch
|
4 years, 3 weeks ago | * Fix RPM creation on newer distributions | 0|i2k5sf: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2274 | ZooKeeperServerMain is difficult to subclass for unit testing |
Improvement | Patch Available | Major | Unresolved | Jordan Zimmerman | Jordan Zimmerman | Jordan Zimmerman | 11/Sep/15 13:46 | 02/Mar/16 21:30 | 3.5.1 | server, tests | 0 | 1 | Apache Curator needs a testable version of ZooKeeperServerMain. In the past, Curator has used javassist, reflection, etc. but this is all clumsy. With a few trivial changes, Curator could use ZooKeeperServerMain directly by subclassing. | 9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 4 years, 3 weeks ago | 0|i2k2gv: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2273 | Uninvited ZK joins the ensemble |
Bug | Open | Major | Unresolved | Unassigned | Benjamin Jaton | Benjamin Jaton | 10/Sep/15 14:45 | 16/Sep/15 19:14 | 0 | 2 | Scenario: - Install a Zookeeper on machine A - Install a Zookeeper on machine B, joining A to form an ensemble - Reinstall ZooKeeper on A (but with standaloneEnabled=false) -> B automatically joins A to form an ensemble again I think the work needed is discussed and addressed in ZOOKEEPER-832. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 27 weeks, 1 day ago | 0|i2k0i7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2272 | Code clean up in ZooKeeperServer and KerberosName |
Improvement | Patch Available | Trivial | Unresolved | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 10/Sep/15 02:36 | 05/Feb/20 07:11 | 3.5.0 | 3.7.0, 3.5.8 | server | 0 | 2 | # The following code in {{org.apache.zookeeper.server.ZooKeeperServer}} should be cleaned up. Somehow it got missed in code review {code} if ((System.getProperty("zookeeper.allowSaslFailedClients") != null) && (System.getProperty("zookeeper.allowSaslFailedClients").equals("true"))) { {code} should be replaced with {code} if(Boolean.getBoolean("zookeeper.allowSaslFailedClients")) {code} # A similar code clean-up can be done in {{org.apache.zookeeper.server.auth.KerberosName}} |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 4 weeks, 1 day ago | 0|i2jzdz: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
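The clean-up in ZOOKEEPER-2272 relies on Boolean.getBoolean, which reads a system property and parses it as a boolean in one call. The two forms agree for an absent property, "true", and "false"; note Boolean.getBoolean also accepts "TRUE" case-insensitively, which the verbose .equals("true") check does not:

```java
public class SaslFlagCleanup {
    private static final String KEY = "zookeeper.allowSaslFailedClients";

    // Original pattern flagged in the report: two property lookups.
    static boolean verboseForm() {
        return (System.getProperty(KEY) != null)
                && System.getProperty(KEY).equals("true");
    }

    // Proposed replacement: one call with the same effect
    // (up to the case-insensitivity noted above).
    static boolean conciseForm() {
        return Boolean.getBoolean(KEY);
    }

    public static void main(String[] args) {
        for (String v : new String[] {null, "true", "false"}) {
            if (v == null) System.clearProperty(KEY);
            else System.setProperty(KEY, v);
            System.out.println(verboseForm() == conciseForm()); // prints true each time
        }
    }
}
```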
| ZooKeeper | ZOOKEEPER-2271 | ZOOKEEPER-2270 Allow MBeanRegistry to be overridden for better unit tests in 3.4.x |
Sub-task | Resolved | Major | Fixed | Jordan Zimmerman | Jordan Zimmerman | Jordan Zimmerman | 09/Sep/15 18:03 | 16/Oct/16 11:30 | 10/Sep/15 00:17 | 3.4.6 | 3.4.7 | server | 0 | 2 | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 28 weeks ago | 0|i2jyu7: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2270 | Allow MBeanRegistry to be overridden for better unit tests |
Improvement | Resolved | Major | Fixed | Jordan Zimmerman | Jordan Zimmerman | Jordan Zimmerman | 09/Sep/15 17:45 | 16/Oct/16 11:30 | 10/Sep/15 00:18 | 3.4.6, 3.5.1 | 3.5.2, 3.6.0 | server | 0 | 3 | ZOOKEEPER-2271 | Apache Curator currently must use byte code re-writing to prevent the MBeanRegistry from polluting the Platform MBeanServer. Provide a simple way to avoid this. | 9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 4 years, 28 weeks ago | 0|i2jysv: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2269 | NullPointerException in RemotePeerBean |
Bug | Closed | Major | Fixed | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 09/Sep/15 08:16 | 21/Jul/16 16:18 | 10/Sep/15 00:50 | 3.5.0 | 3.5.2, 3.6.0 | jmx | 0 | 4 | {code}org.apache.zookeeper.server.quorum.RemotePeerBean.getClientAddress(){code} throws NullPointerException when clientPort is not part of dynamic configuration. |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 28 weeks ago | 0|i2jy0v: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2268 | Zookeeper doc creation fails on windows |
Bug | Closed | Major | Fixed | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 08/Sep/15 10:08 | 21/Jul/16 16:18 | 03/Oct/15 17:04 | 3.5.0 | 3.4.7, 3.5.2 | build | 0 | 4 | Zookeeper doc creation fails on windows with following error {code} D:\gitHome\zookeeper-trunk\build.xml:484: Execute failed: java.io.IOException: Cannot run program "C:\non-install\apache-forrest-0.9\bin\forrest" y "D:\gitHome\zookeeper-trunk\src\docs"): CreateProcess error=193, %1 is not a valid Win32 application {code} |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 24 weeks, 3 days ago |
Reviewed
|
0|i2jw27: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2267 | zoo_amulti doesn't correctly return ZINVALIDSTATE errors |
Improvement | Open | Minor | Unresolved | Unassigned | Mark Flickinger | Mark Flickinger | 08/Sep/15 02:28 | 28/Nov/16 15:18 | 3.4.0 | c client | 0 | 2 | ZOOKEEPER-2414 | zoo_amulti will always return ZMARSHALLINGERROR whenever the zhandle is in an unrecoverable state. ZINVALIDSTATE should probably be returned in these situations. Preferably the return code from the relevant error should be returned as is. At the very least it would be nice if zoo_amulti first checked is_unrecoverable, the way zoo_awget does. It seems the other async functions are implicitly doing this in the *Request_init calls. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 28 weeks, 2 days ago | 0|i2jvgv: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2266 | Integrate JaCoCo Coverage Library |
Improvement | Patch Available | Minor | Unresolved | Akihiro Suda | Akihiro Suda | Akihiro Suda | 07/Sep/15 05:23 | 09/Mar/16 06:31 | tests | 0 | 3 | ZOOKEEPER-2294 | I would like to propose integration of [JaCoCo|http://www.eclemma.org/jacoco/] coverage library with ZooKeeper. h4. Purposes - To find poorly covered methods, and improve JUnit testcases to cover them - To estimate causes of flaky testcases (e.g. ZOOKEEPER-2080, ZOOKEEPER-2252, ZOOKEEPER-1868) by comparing reports from succeeded experiments and failed ones (I'm recently interested in how we can systematically realize this.) h4. Advantages of JaCoCo - Support recent JDKs (including JDK 8) - Low overhead - Released under EPL -- Note: cobertura has been removed from the code base because it is released under GPL ( ZOOKEEPER-75, http://www.apache.org/legal/resolved.html#category-x ) h4. Usage {panel} $ ant test #(plus optionally, -Dtestcase=.. -Dtest.method=..) $ ant jacoco-report $ x-www-browser build/test/jacoco/reports/index.html & {panel} Example: jacoco-report-example.zip h4. Possible Future Work - Integrate to Jenkins buildbot so that we can check coverage after each of builds |
9223372036854775807 | No Perforce job exists for this issue. | 5 | 9223372036854775807 |
Patch
|
4 years, 2 weeks, 1 day ago | 0|i2jum7: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2265 | zookeeper build fails while doing configuration for cppunit test |
Bug | Open | Minor | Unresolved | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 03/Sep/15 10:32 | 05/Feb/20 07:16 | 3.5.1 | 3.7.0, 3.5.8 | build | 0 | 2 | Running {color:red}ant tar{color} gives the following error {code} D:\gitHome\zookeeper-trunk\build.xml:1432: Execute failed: java.io.IOException: Cannot run program "autoreconf" (in directory "D:\gitHome\zookeeper-trunk\src\c"): {code} This is purely an environment error and can be fixed by installing the appropriate software package. But is it really required to configure cppunit, given that the {color:red}ant tar{color} target flow does not run the cppunit test cases? Then why configure it? There should be no cppunit configuration in the {color:red}ant tar{color} target flow. |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 3 weeks ago | 0|i2jr13: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2264 | Wrong error message when secureClientPortAddress is configured but secureClientPort is not configured |
Bug | Closed | Minor | Fixed | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 31/Aug/15 09:22 | 21/Jul/16 16:18 | 06/Sep/15 14:06 | 3.5.2, 3.6.0 | server | 0 | 4 | # Wrong error message when secureClientPortAddress is configured but secureClientPort is not: ZooKeeper throws IllegalArgumentException with the error message {{clientPortAddress is set but clientPort is not set}}, but it should be {{secureClientPortAddress is set but secureClientPort is not set}} # There is another problem with the same code: the value is assigned to a local variable but the null check is done on the instance variable, so we will never get the error message for this scenario. {code}if (this.secureClientPortAddress != null) {{code} should be replaced with {code}if (secureClientPortAddress != null) {{code} # The same problem exists for the clientPort scenario, so we should replace {code}if (this.clientPortAddress != null) {{code} with {code}if (clientPortAddress != null) {{code} |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 28 weeks, 4 days ago | 0|i2jlhr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
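The second defect in ZOOKEEPER-2264 — checking this.secureClientPortAddress while the parsed value sits in a local variable — is a classic field-shadowing mistake. A minimal reproduction with hypothetical names, not the actual QuorumPeerConfig code:

```java
public class ShadowedNullCheck {
    private String secureClientPortAddress; // field: never assigned below

    // Buggy pattern: the local variable shadows the field, so the null
    // check on "this." inspects a value that was never set.
    boolean buggyValidate(String parsed) {
        String secureClientPortAddress = parsed;     // parsed value lands here
        return this.secureClientPortAddress != null; // wrong variable: always false
    }

    // Fix from the report: check the variable that actually holds the value.
    boolean fixedValidate(String parsed) {
        String secureClientPortAddress = parsed;
        return secureClientPortAddress != null;
    }

    public static void main(String[] args) {
        ShadowedNullCheck c = new ShadowedNullCheck();
        System.out.println(c.buggyValidate("0.0.0.0:2281")); // prints false
        System.out.println(c.fixedValidate("0.0.0.0:2281")); // prints true
    }
}
```

This is why the validation in the report can never fire: the branch it guards is unreachable for any input.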
| ZooKeeper | ZOOKEEPER-2263 | ZooKeeper server should not start when neither clientPort nor secureClientPort is configured |
Bug | Patch Available | Minor | Unresolved | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 28/Aug/15 11:14 | 05/Feb/20 07:11 | 3.7.0, 3.5.8 | 0 | 3 | The ZooKeeper server should not start when neither clientPort nor secureClientPort is configured. Without any client port the ZooKeeper server cannot serve any purpose. It should simply exit with a proper error message |
9223372036854775807 | No Perforce job exists for this issue. | 3 | 9223372036854775807 | 3 years, 39 weeks, 2 days ago | 0|i2jj9b: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2262 | Admin commands do not include secure client information |
Bug | Open | Major | Unresolved | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 28/Aug/15 10:56 | 05/Feb/20 07:16 | 3.5.0 | 3.7.0, 3.5.8 | server | 0 | 1 | Admin commands do not include secure client information. The connections, configuration, connection_stat_reset, and stats admin commands should include secure client information: 1) configuration should also include the secure client port 2) connections should also include secure connections 3) connection_stat_reset should also reset secure connection stats 4) the stats command should accumulate both secure and non-secure information |
JettyAdminServer | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 29 weeks, 6 days ago | 0|i2ji5z: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2261 | When only secureClientPort is configured connections, configuration, connection_stat_reset, and stats admin commands throw NullPointerException |
Bug | Closed | Major | Fixed | Andor Molnar | Mohammad Arshad | Mohammad Arshad | 28/Aug/15 10:32 | 20/May/19 13:50 | 10/Sep/18 18:18 | 3.5.0 | 3.6.0, 3.5.5 | 0 | 4 | 0 | 16200 | When only secureClientPort is configured, the connections, configuration, connection_stat_reset, and stats admin commands throw NullPointerException. Here is the stack trace from the connections command. {code} java.lang.NullPointerException at org.apache.zookeeper.server.admin.Commands$ConsCommand.run(Commands.java:177) at org.apache.zookeeper.server.admin.Commands.runCommand(Commands.java:92) at org.apache.zookeeper.server.admin.JettyAdminServer$CommandServlet.doGet(JettyAdminServer.java:166) at javax.servlet.http.HttpServlet.service(HttpServlet.java:707) {code} |
100% | 100% | 16200 | 0 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 1 year, 27 weeks, 2 days ago | 0|i2ji3z: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2260 | Paginated getChildren call |
New Feature | Patch Available | Major | Unresolved | Marco P. | Marco P. | Marco P. | 27/Aug/15 20:29 | 05/Feb/20 07:11 | 3.4.6, 3.5.0 | 3.7.0, 3.5.8 | 4 | 24 | 0 | 600 | HBASE-14938, ZOOKEEPER-1162, ZOOKEEPER-282 | Add pagination support to the getChildren() call, allowing clients to iterate over children N at a time. Motivations for this include: - Getting out of a situation where so many children were created that listing them exceeded the network buffer sizes (making it impossible to recover by deleting)[1] - More efficient traversal of nodes with a large number of children [2] I do have a patch (for 3.4.6) we've been using successfully for a while, but I suspect much more work is needed for this to be accepted. [1] https://issues.apache.org/jira/browse/ZOOKEEPER-272 [2] https://issues.apache.org/jira/browse/ZOOKEEPER-282 |
100% | 100% | 600 | 0 | api, features, pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 1 year, 17 weeks, 1 day ago | 0|i2jh5j: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
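The motivation in ZOOKEEPER-2260 — child lists so large that one response overflows the network buffer — reduces to chunking one large list into fixed-size pages. A minimal, ZooKeeper-independent sketch; the actual API shape in the patch may differ:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of paginated child listing. Serving pages of at most
// pageSize children avoids one giant response that could exceed the
// network buffer limits (jute.maxbuffer) mentioned in the issue.
public class PagedChildren {
    static <T> List<List<T>> paginate(List<T> children, int pageSize) {
        List<List<T>> pages = new ArrayList<>();
        for (int i = 0; i < children.size(); i += pageSize) {
            // subList is a view; each page holds at most pageSize elements.
            pages.add(children.subList(i, Math.min(i + pageSize, children.size())));
        }
        return pages;
    }
}
```

In a real server-side implementation each page would be fetched with a cursor or offset per request, so that no single reply carries the full child list.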
| ZooKeeper | ZOOKEEPER-2259 | 4 letter word commands are slow because these unnecessarily go through Sasl authentication |
Bug | Open | Major | Unresolved | Unassigned | Mohammad Arshad | Mohammad Arshad | 27/Aug/15 10:46 | 28/Aug/15 02:29 | 0 | 2 | 4 letter word commands are slow because these commands unnecessarily go through Sasl authentication. {code} ZooKeeperSaslServer.<init>(Login) line: 48 NettyServerCnxn.<init>(Channel, ZooKeeperServer, NettyServerCnxnFactory) line: 88 NettyServerCnxnFactory$CnxnChannelHandler.channelConnected(ChannelHandlerContext, ChannelStateEvent) line: 89 NettyServerCnxnFactory$CnxnChannelHandler(SimpleChannelHandler).handleUpstream(ChannelHandlerContext, ChannelEvent) line: 118 DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline$DefaultChannelHandlerContext, ChannelEvent) line: 564 {code} As per the documentation, 4lw commands are executed as below: {{$ echo mntr | nc localhost 2185}}. Even without passing any authentication information, this works fine. So 4lw commands should either do authentication properly or not go through the Sasl authentication flow. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 30 weeks ago | 0|i2jgan: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2258 | ZooKeeper bin scripts for Windows need to be updated |
Improvement | Open | Major | Unresolved | Unassigned | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 27/Aug/15 04:09 | 27/Aug/15 04:09 | scripts | 0 | 3 | The idea of this jira is to port all the changes done to the {{zkxx.sh}} unix based scripts to the Windows {{zkxx.cmd}} scripts. | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 30 weeks ago | 0|i2jfr3: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2257 | Make zookeeper server principal configurable at zookeeper client side |
Improvement | Resolved | Major | Duplicate | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 21/Aug/15 11:25 | 03/Jan/17 18:48 | 03/Sep/15 10:45 | 0 | 6 | ZOOKEEPER-1467 | Currently the Zookeeper client expects the zookeeper server's principal to be in the form zookeeper.sasl.client.username/server-ip, for example zookeeper/192.162.1.100. But this may not always be the case; the server principal can be something like zookeeper/hadoop.foo.com. It would be better if we could make the server principal configurable. Current Code: {code} String principalUserName = System.getProperty(ZK_SASL_CLIENT_USERNAME, "zookeeper"); zooKeeperSaslClient = new ZooKeeperSaslClient(principalUserName + "/" + addr.getHostString()); {code} Proposed Code: {code} String serverPrincipal = System.getProperty("zookeeper.server.principal"); if (null != serverPrincipal) { zooKeeperSaslClient = new ZooKeeperSaslClient(serverPrincipal); } else { String principalUserName = System.getProperty(ZK_SASL_CLIENT_USERNAME, "zookeeper"); zooKeeperSaslClient = new ZooKeeperSaslClient(principalUserName + "/" + addr.getHostString()); } {code} |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 11 weeks, 2 days ago | 0|i2j7sf: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2256 | Zookeeper is not using specified JMX port in zkEnv.sh |
Bug | Closed | Minor | Fixed | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 20/Aug/15 08:57 | 21/Jul/16 16:18 | 06/Sep/15 13:51 | 3.5.0 | 3.4.7, 3.5.2, 3.6.0 | scripts | 0 | 4 | Zookeeper is not using the specified JMX port. I put the below entry in zkEnv.sh {{export JMXPORT=12345}} But zookeeper still uses a random port for JMX. |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 4 years, 28 weeks, 4 days ago | 0|i2j5nj: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2255 | Use static member classes when permitted |
Improvement | Patch Available | Minor | Unresolved | Yvonne Ironberg | Yvonne Ironberg | Yvonne Ironberg | 20/Aug/15 01:52 | 30/Jan/19 07:57 | 0 | 2 | 0 | 600 | Using static member classes saves time and space because an instance of a non-static member class holds a reference to its enclosing instance. Some style improvements were also made: - JLS recommends modifiers be in this order: public protected private abstract static final transient volatile synchronized native strictfp. - Inserted some spaces. |
100% | 100% | 600 | 0 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 3 weeks ago | 0|i2j53r: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
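The space cost named in ZOOKEEPER-2255 is the hidden reference every instance of a non-static member class keeps to its enclosing instance. A hypothetical Outer class illustrates the difference:

```java
public class Outer {
    // Non-static member class: every instance carries a hidden Outer.this
    // reference, so it cannot exist without an enclosing instance.
    public class Inner {
        public Outer enclosing() {
            return Outer.this; // the hidden reference the issue refers to
        }
    }

    // Static member class: no hidden reference, no enclosing instance
    // needed, so it is cheaper and cannot accidentally retain the outer
    // object (a common source of memory leaks).
    public static class Nested {
    }
}
```

Usage shows the asymmetry: `new Outer().new Inner()` requires an `Outer`, while `new Outer.Nested()` does not.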
| ZooKeeper | ZOOKEEPER-2254 | CDH 5.4.4 HBASE 1.0.0 with PHX 4.5.0 |
Bug | Resolved | Major | Won't Fix | Unassigned | JUAN TORRES | JUAN TORRES | 19/Aug/15 18:32 | 20/Aug/15 07:47 | 20/Aug/15 07:47 | 0 | 1 | Hi, I need support. I have tried to troubleshoot this problem, but whatever I do is not working. I do not have a problem with the NN, ZK or any network limitation, but Phoenix is still not able to connect to the zookeeper servers. [root@svrdn174 bin]# /opt/phoenix/bin/sqlline.py svrzj001,svrzj002,svrzj003:2181/hbase Setting property: [isolation, TRANSACTION_READ_COMMITTED] issuing: !connect jdbc:phoenix:svrzj001,svrzj002,svrzj003:2181/hbase none none org.apache.phoenix.jdbc.PhoenixDriver Connecting to jdbc:phoenix:svrzj001,svrzj002,svrzj003:2181/hbase 15/08/19 18:15:30 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable Error: ERROR 103 (08004): Unable to establish connection. (state=08004,code=103) java.sql.SQLException: ERROR 103 (08004): Unable to establish connection. 
at org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:388) at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145) at org.apache.phoenix.query.ConnectionQueryServicesImpl.openConnection(ConnectionQueryServicesImpl.java:297) at org.apache.phoenix.query.ConnectionQueryServicesImpl.access$300(ConnectionQueryServicesImpl.java:180) at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1901) at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1880) at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:77) at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:1880) at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:180) at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.connect(PhoenixEmbeddedDriver.java:132) at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:151) at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157) at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203) at sqlline.Commands.connect(Commands.java:1064) at sqlline.Commands.connect(Commands.java:996) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36) at sqlline.SqlLine.dispatch(SqlLine.java:804) at sqlline.SqlLine.initArgs(SqlLine.java:588) at sqlline.SqlLine.begin(SqlLine.java:656) at sqlline.SqlLine.start(SqlLine.java:398) at sqlline.SqlLine.main(SqlLine.java:292) Caused by: java.io.IOException: java.lang.reflect.InvocationTargetException at 
org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:240) at org.apache.hadoop.hbase.client.ConnectionManager.createConnection(ConnectionManager.java:410) at org.apache.hadoop.hbase.client.ConnectionManager.createConnectionInternal(ConnectionManager.java:319) at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:144) at org.apache.phoenix.query.HConnectionFactory$HConnectionFactoryImpl.createConnection(HConnectionFactory.java:47) at org.apache.phoenix.query.ConnectionQueryServicesImpl.openConnection(ConnectionQueryServicesImpl.java:295) ... 22 more Caused by: java.lang.reflect.InvocationTargetException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:526) at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238) ... 27 more Caused by: java.lang.ExceptionInInitializerError at org.apache.hadoop.hbase.ClusterId.parseFrom(ClusterId.java:64) at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:75) at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:86) at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:833) at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:623) ... 
32 more Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: svrhdfscluster at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:373) at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:258) at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:153) at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:602) at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:547) at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:139) at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591) at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89) at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2625) at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2607) at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368) at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296) at org.apache.hadoop.hbase.util.DynamicClassLoader.<init>(DynamicClassLoader.java:104) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.<clinit>(ProtobufUtil.java:229) ... 37 more Caused by: java.net.UnknownHostException: svrhdfscluster ... 51 more sqlline version 1.1.8 0: jdbc:phoenix:svrzj001,svrzj002,svrzj003:21> Thanks, |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 31 weeks ago | 0|i2j4iv: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2253 | C asserts ordering of ping requests, while Java client does not |
Bug | Resolved | Major | Fixed | Chris Chen | Chris Chen | Chris Chen | 19/Aug/15 17:58 | 02/Mar/16 20:29 | 26/Sep/15 16:30 | 3.5.0 | c client | 0 | 4 | Affects C clients from 3.3 to trunk. The Java client does not enforce ordering on ping requests. It merely updates fields when a ping reply is received and schedules a new ping request when necessary. The C client actually enqueues the void response in the completion data structure and pulls it off when it gets a response. This sounds like an implementation detail (and it is, sort of), but if a future server were to, say, send unsolicited ping replies to a client to assert liveness, it would work fine against a Java client but would cause a C client to fail the assertion in zookeeper_process, "assert(cptr)", line 2912, zookeeper.c. |
9223372036854775807 | No Perforce job exists for this issue. | 4 | 9223372036854775807 | 4 years, 25 weeks, 4 days ago | 0|i2j4hb: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2252 | Random test case failure in org.apache.zookeeper.test.StaticHostProviderTest |
Bug | Closed | Minor | Fixed | Timothy James Ward | Mohammad Arshad | Mohammad Arshad | 19/Aug/15 16:09 | 21/Jul/16 16:18 | 10/Dec/15 15:44 | 3.5.0 | 3.5.2, 3.6.0 | 0 | 7 | ZOOKEEPER-2337 | Test {{org.apache.zookeeper.test.StaticHostProviderTest.testTwoInvalidHostAddresses()}} fails randomly. Refer to the below CI builds: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2827/testReport/ https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2828/testReport/ https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2830/testReport/ |
9223372036854775807 | No Perforce job exists for this issue. | 6 | 9223372036854775807 | 4 years, 15 weeks ago |
Reviewed
|
0|i2j4cn: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2251 | Add Client side packet response timeout to avoid infinite wait. |
Bug | Closed | Critical | Fixed | Mohammad Arshad | nijel | nijel | 18/Aug/15 22:47 | 20/May/19 13:50 | 26/Jul/18 23:16 | 3.4.9, 3.5.2, 3.4.11 | 3.6.0, 3.5.5 | java client | 4 | 16 | 0 | 12600 | I came across one issue related to a client-side packet response timeout. In my cluster, many packet drops happened for some time. One observation is that the zookeeper client hung. As per the thread dump, it is waiting for the response/ACK for the operation performed (a synchronous API is used here). I am using zookeeper.serverCnxnFactory=org.apache.zookeeper.server.NIOServerCnxnFactory Since only a few packets were missed, no DISCONNECTED event occurred. We need to add a "response timeout" for the operations or packets. *Comments from [~rakeshr]* My observation about the problem:- * Tools like 'Wireshark' can be used to simulate artificial packet loss. * Assume there is only one packet in the 'outgoingQueue' and unfortunately the server response packet is lost. Now the client will enter an infinite wait. https://github.com/apache/zookeeper/blob/trunk/src/java/main/org/apache/zookeeper/ClientCnxn.java#L1515 * We can probably discuss this problem and possible solutions (add a packet ACK timeout or another better approach) further in the jira. |
100% | 100% | 12600 | 0 | fault, pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 4 | 9223372036854775807 | 1 year, 33 weeks, 6 days ago | 0|i2j31z: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
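The fix direction in ZOOKEEPER-2251 is to bound the client's wait for a response instead of waiting forever on the packet. A sketch with a hypothetical helper (not the actual ClientCnxn change), assuming the response thread notifies on the packet object when a reply arrives:

```java
import java.util.function.BooleanSupplier;

// Hypothetical sketch: wait for a response with a deadline, so a lost
// server reply produces a timeout instead of an infinite wait.
public class BoundedWait {
    // Waits until done() reports the response arrived, or until timeoutMs
    // elapses; returns false on timeout or interrupt so the caller can
    // fail the pending operation instead of hanging.
    static boolean awaitResponse(Object packet, BooleanSupplier done, long timeoutMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        synchronized (packet) {
            while (!done.getAsBoolean()) {
                long remaining = deadline - System.currentTimeMillis();
                if (remaining <= 0) {
                    return false; // response timed out
                }
                try {
                    packet.wait(remaining); // woken by notify or timeout
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return false;
                }
            }
        }
        return true;
    }
}
```

The loop re-checks the condition after every wakeup, which also guards against spurious wakeups.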
| ZooKeeper | ZOOKEEPER-2250 | Support client connections using a SOCKS proxy |
New Feature | Open | Major | Unresolved | Unassigned | David Phillips | David Phillips | 17/Aug/15 19:50 | 07/Feb/17 11:19 | java client | 3 | 5 | Connecting to ZooKeeper via a SOCKS proxy is often useful for debugging systems over an SSH dynamic port forward. It is possible to do this today with some hacking by setting "zookeeper.clientCnxnSocket", but that is difficult because ClientCnxnSocket is package-private and is quite low-level. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 6 weeks, 2 days ago | 0|i2j153: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2249 | CRC check failed when preAllocSize smaller than node data |
Bug | Resolved | Major | Fixed | Abraham Fine | Benjamin Jaton | Benjamin Jaton | 17/Aug/15 12:24 | 08/Mar/18 12:40 | 18/Jan/18 18:48 | 3.5.3, 3.4.11, 3.6.0 | 3.5.4, 3.6.0, 3.4.12 | 0 | 5 | ZOOKEEPER-2994 | Unexpected exception, exiting abnormally java.io.IOException: CRC check failed org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.next(FileTxnLog.java:612) org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:157) org.apache.zookeeper.server.ZKDatabase.loadDataBase(ZKDatabase.java:223) org.apache.zookeeper.server.ZooKeeperServer.loadData(ZooKeeperServer.java:272) org.apache.zookeeper.server.ZooKeeperServer.startdata(ZooKeeperServer.java:399) To reproduce, set the preAllocSize to 8MB, the jute.maxbuffer to 20MB and try saving a 15MB node several times. In my case the erroneous CRC appears after the second save. I use the LogFormatter class to detect it. I suspect that the CRC error happens when the new transaction log is created, the code probably expects to have enough room to save the transaction when creating a new file, but it's too small. |
server | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 2 years, 9 weeks ago | 0|i2j0ev: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
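The "CRC check failed" in ZOOKEEPER-2249 surfaces from the transaction log's checksum-then-verify pattern (ZooKeeper's FileTxnLog records an Adler32 checksum alongside each transaction). A minimal sketch of that pattern, independent of the actual log format:

```java
import java.util.zip.Adler32;
import java.util.zip.Checksum;

// Sketch of the per-transaction checksum pattern: a checksum is recorded
// at write time and recomputed at read time; any mismatch (e.g. a record
// truncated by a too-small preAllocSize) is reported as a CRC failure.
public class TxnChecksum {
    static long checksum(byte[] txnBytes) {
        Checksum crc = new Adler32();
        crc.update(txnBytes, 0, txnBytes.length);
        return crc.getValue();
    }

    static boolean verify(byte[] txnBytes, long recorded) {
        return checksum(txnBytes) == recorded;
    }
}
```

The issue's scenario is effectively `verify` failing because the bytes read back do not match the bytes that were checksummed at write time.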
| ZooKeeper | ZOOKEEPER-2248 | log.zxid file is very large; it is likely to exhaust zk disk space |
Improvement | Open | Major | Unresolved | Unassigned | Yongcheng Liu | Yongcheng Liu | 15/Aug/15 03:54 | 15/Aug/15 17:20 | 0 | 3 | 1. This is about logCount (the count of log entries). 2. It is a local variable in SyncRequestProcessor and is reset to 0 when the SyncRequestProcessor thread exits for LOOKING. The SyncRequestProcessor thread exits while the zk process stays up when the follower is following the leader and the connection is broken. 3. This leads to the log.zxid file expanding by 64M each time |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 31 weeks, 5 days ago | 0|i2iyj3: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2247 | Zookeeper service becomes unavailable when leader fails to write transaction log |
Bug | Closed | Critical | Fixed | Rakesh Radhakrishnan | Mohammad Arshad | Mohammad Arshad | 14/Aug/15 09:13 | 30/Jan/19 09:31 | 13/Aug/16 09:59 | 3.5.0 | 3.4.9, 3.5.3, 3.6.0 | 0 | 12 | 0 | 600 | ZOOKEEPER-2529, ZOOKEEPER-2452, ZOOKEEPER-1907 | Zookeeper service becomes unavailable when leader fails to write transaction log. Below are the exceptions {code} 2015-08-14 15:41:18,556 [myid:100] - ERROR [SyncThread:100:ZooKeeperCriticalThread@48] - Severe unrecoverable error, from thread : SyncThread:100 java.io.IOException: Input/output error at sun.nio.ch.FileDispatcherImpl.force0(Native Method) at sun.nio.ch.FileDispatcherImpl.force(FileDispatcherImpl.java:76) at sun.nio.ch.FileChannelImpl.force(FileChannelImpl.java:376) at org.apache.zookeeper.server.persistence.FileTxnLog.commit(FileTxnLog.java:331) at org.apache.zookeeper.server.persistence.FileTxnSnapLog.commit(FileTxnSnapLog.java:380) at org.apache.zookeeper.server.ZKDatabase.commit(ZKDatabase.java:563) at org.apache.zookeeper.server.SyncRequestProcessor.flush(SyncRequestProcessor.java:178) at org.apache.zookeeper.server.SyncRequestProcessor.run(SyncRequestProcessor.java:113) 2015-08-14 15:41:18,559 [myid:100] - INFO [SyncThread:100:ZooKeeperServer$ZooKeeperServerListenerImpl@500] - Thread SyncThread:100 exits, error code 1 2015-08-14 15:41:18,559 [myid:100] - INFO [SyncThread:100:ZooKeeperServer@523] - shutting down 2015-08-14 15:41:18,560 [myid:100] - INFO [SyncThread:100:SessionTrackerImpl@232] - Shutting down 2015-08-14 15:41:18,560 [myid:100] - INFO [SyncThread:100:LeaderRequestProcessor@77] - Shutting down 2015-08-14 15:41:18,560 [myid:100] - INFO [SyncThread:100:PrepRequestProcessor@1035] - Shutting down 2015-08-14 15:41:18,560 [myid:100] - INFO [SyncThread:100:ProposalRequestProcessor@88] - Shutting down 2015-08-14 15:41:18,561 [myid:100] - INFO [SyncThread:100:CommitProcessor@356] - Shutting down 2015-08-14 15:41:18,561 [myid:100] - INFO [CommitProcessor:100:CommitProcessor@191] - CommitProcessor 
exited loop! 2015-08-14 15:41:18,562 [myid:100] - INFO [SyncThread:100:Leader$ToBeAppliedRequestProcessor@915] - Shutting down 2015-08-14 15:41:18,562 [myid:100] - INFO [SyncThread:100:FinalRequestProcessor@646] - shutdown of request processor complete 2015-08-14 15:41:18,562 [myid:100] - INFO [SyncThread:100:SyncRequestProcessor@191] - Shutting down 2015-08-14 15:41:18,563 [myid:100] - INFO [ProcessThread(sid:100 cport:-1)::PrepRequestProcessor@159] - PrepRequestProcessor exited loop! {code} After this exception, the server still remains leader. After such a non-recoverable exception, the leader should go down and let one of the followers become leader. |
100% | 100% | 600 | 0 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 25 | 9223372036854775807 | 2 years, 36 weeks, 5 days ago | 0|i2ixev: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2246 | quorum connection manager takes a long time to shut down |
Bug | Open | Major | Unresolved | Michael Han | Michi Mutsuzaki | Michi Mutsuzaki | 12/Aug/15 22:44 | 05/Feb/20 07:16 | 3.7.0, 3.5.8 | quorum | 0 | 8 | ZOOKEEPER-2080 | Receive worker can take a long time to shut down because the socket timeout is set to zero: http://s.apache.org/TfI There was a discussion on the mailing list a while back: http://s.apache.org/cYG |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 27 weeks, 1 day ago | 0|i2iuy7: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2245 | SimpleSysTest test cases fails |
Bug | Closed | Major | Fixed | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 11/Aug/15 09:48 | 21/Jul/16 16:18 | 17/Sep/15 03:14 | 3.5.0 | 3.4.7, 3.5.2, 3.6.0 | 0 | 5 | ZOOKEEPER-2244 | When {{org.apache.zookeeper.test.system.SimpleSysTest}} is run for in-memory Zookeeper Servers, by specifying baseSysTest.fakeMachines=yes, it fails and displays the following errors 1: {code} java.io.IOException: org.apache.zookeeper.server.quorum.QuorumPeerConfig$ConfigException: Address unresolved: 127.0.0.1:participant at org.apache.zookeeper.server.quorum.Leader.lead(Leader.java:474) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:1077) Caused by: org.apache.zookeeper.server.quorum.QuorumPeerConfig$ConfigException: Address unresolved: 127.0.0.1:participant at org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer.<init>(QuorumPeer.java:221) {code} 2: {code} java.lang.NullPointerException at org.apache.zookeeper.test.system.BaseSysTest.tearDown(BaseSysTest.java:66) {code} |
9223372036854775807 | No Perforce job exists for this issue. | 5 | 9223372036854775807 | 4 years, 27 weeks ago | 0|i2il73: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2244 | On Windows zookeeper fails to restart |
Bug | Closed | Critical | Fixed | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 11/Aug/15 07:26 | 21/Jul/16 16:18 | 28/Sep/15 22:12 | 3.5.0 | 3.5.2, 3.6.0 | 0 | 7 | ZOOKEEPER-2245 | This issue occurs in the following scenario 1) configure server properties in the zookeeper configuration file (zoo.cfg), for example: {code} server.1=localhost:43222:43225:participant;0.0.0.0:43228 server.2=localhost:43223:43226:participant;0.0.0.0:43229 server.3=localhost:43224:43227:participant;0.0.0.0:43230 {code} 2) start the servers on windows; all the servers start successfully 3) stop any of the servers 4) try to start the stopped server. It fails with the following error {code} org.apache.zookeeper.server.quorum.QuorumPeerConfig$ConfigException: Error processing D:\SystemTestCases\ZKServer1\conf\zoo.cfg.dynamic.100000000 {code} |
9223372036854775807 | No Perforce job exists for this issue. | 8 | 9223372036854775807 | 4 years, 24 weeks, 3 days ago | 0|i2il07: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2243 | Supported platforms is completely out of date |
Bug | Closed | Major | Fixed | Chris Nauroth | Ivan Kelly | Ivan Kelly | 11/Aug/15 04:16 | 21/Jul/16 16:18 | 08/Feb/16 16:33 | 3.4.9, 3.5.2, 3.6.0 | 0 | 8 | ZOOKEEPER-1996, INFRA-10116 | http://zookeeper.apache.org/doc/r3.4.6/zookeeperAdmin.html#sc_supportedPlatforms It refers to Solaris as Sun Solaris so it's at least 5 years out of date. We should "support" the platforms that we are running zookeeper on regularly, so I suggest paring it down to linux and windows (mac os doesn't really count because people don't run it on servers anymore). Everything else should be "may work, not supported, but will fix obvious bugs". |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 4 years, 6 weeks, 3 days ago |
Reviewed
|
0|i2ikqn: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2242 | ZooKeeper OSGi bundle is missing package import for org.ietf.jgss. |
Bug | Resolved | Major | Duplicate | Unassigned | Chris Nauroth | Chris Nauroth | 06/Aug/15 13:41 | 21/Aug/15 12:27 | 21/Aug/15 12:27 | build | 0 | 1 | ZOOKEEPER-2056 | The ZooKeeper build injects OSGi headers into the manifest, but the {{Import-Package}} header does not include {{org.ietf.jgss}}, which is used by the ZooKeeper code. For applications using ZooKeeper inside an OSGi container, this can cause {{ClassNotFoundException}} unless the application adds the missing import to its own OSGi bundle. | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 30 weeks, 6 days ago | 0|i2ifi7: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2241 | In Login.java, a tgt with a small expiry time will break the code |
Bug | Open | Minor | Unresolved | Unassigned | Reza Farivar | Reza Farivar | 05/Aug/15 16:18 | 05/Aug/15 16:18 | 0 | 1 | In the Login.java code, if a TGT with a small expiration date (e.g. 5 minutes) is passed in, the refresh date is set at a value less than MIN_TIME_BEFORE_RELOGIN, which is a minute by default. As a result, the condition in line 153 evaluates to true, setting nextRefresh to now. Then right after, in line 176, it checks nextRefresh against now, will jump to line 186, and just exits (without throwing an exception), exiting the refresh thread. https://github.com/apache/zookeeper/blob/trunk/src/java/main/org/apache/zookeeper/Login.java#L186 Possible Solution: changing line 176 to if (now <= nextRefresh) |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 33 weeks, 1 day ago | 0|i2idvj: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
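The boundary condition in ZOOKEEPER-2241 can be modeled with a few pure functions. This is a hypothetical simplification of the logic the issue describes, not the real Login.java; the line numbers refer to the issue's description:

```java
// Simplified model of the TGT refresh scheduling described above.
public class RefreshLogic {
    static final long MIN_TIME_BEFORE_RELOGIN = 60_000L; // one minute default

    // Line-153 behavior per the issue: a short-lived TGT clamps the
    // refresh time down to "now".
    static long computeNextRefresh(long now, long proposedRefresh) {
        if (proposedRefresh < now + MIN_TIME_BEFORE_RELOGIN) {
            return now;
        }
        return proposedRefresh;
    }

    // Line-176 guard, buggy form: nextRefresh == now is treated as "in
    // the past", so the refresh thread exits instead of re-logging in.
    static boolean refreshThreadContinuesBuggy(long now, long nextRefresh) {
        return now < nextRefresh;
    }

    // Proposed fix from the issue: allow nextRefresh == now.
    static boolean refreshThreadContinuesFixed(long now, long nextRefresh) {
        return now <= nextRefresh;
    }
}
```

With a short-lived TGT, `computeNextRefresh` returns exactly `now`, so only the `<=` form keeps the refresh thread alive.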
| ZooKeeper | ZOOKEEPER-2240 | Make the three-node minimum more explicit in documentation and on website |
Improvement | Closed | Trivial | Fixed | Mohammad Arshad | Shawn Heisey | Shawn Heisey | 05/Aug/15 00:09 | 21/Jul/16 16:18 | 20/Mar/16 00:31 | 3.4.9, 3.5.2, 3.6.0 | documentation | 0 | 6 | One of the most important parts of a production zookeeper deployment is the three-node minimum requirement for fault tolerance ... but when I glance at the website and the documentation, this requirement is difficult to actually find. It is buried deep in the admin documentation, in a sentence that says "Thus, a deployment that consists of three machines can handle one failure, and a deployment of five machines can handle two failures." Other parts of the documentation hint at it, but nothing that I've seen comes out and explicitly says it. Ideally, documentation about this requirement would be in a location where it can be easily pinpointed with a targeted URL, so I can point to ZK documentation with a link and clearly tell SolrCloud users that this is a real requirement. If someone can point me to version control locations where I can check out or clone the docs and the website, I'm happy to attempt a patch. |
9223372036854775807 | No Perforce job exists for this issue. | 5 | 9223372036854775807 | 4 years, 4 days ago |
Reviewed
|
0|i2ich3: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2239 | JMX State from LocalPeerBean incorrect |
Bug | Closed | Major | Fixed | Kevin Lee | Kevin Lee | Kevin Lee | 04/Aug/15 12:33 | 21/Jul/16 16:18 | 26/Oct/15 03:15 | 3.4.6, 3.5.0, 3.5.1 | 3.4.7, 3.5.2, 3.6.0 | jmx | 0 | 6 | All | The "State" property of LocalPeerBean in package org.apache.zookeeper.server.quorum is returning the incorrect value. It is performing peer.getState() which is calling the getState() method on java.lang.Thread instead of getting the server state from org.apache.zookeeper.server.quorum.QuorumPeer. The Javadoc within LocalPeerMXBean.java states that it should be returning the server state as well. The fix is to call peer.getServerState() in the getState() method of LocalPeerBean instead of peer.getState().toString(). This will return the states defined in QuorumStats.Provider (unknown, leaderelection, leading, following, and observing). This issue prevents JMX monitoring of the Zookeeper server state. | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 21 weeks, 3 days ago | 0|i2ibkv: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
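The root cause in ZOOKEEPER-2239 is method resolution rather than JMX itself: on a class extending Thread, an unqualified getState() call binds to Thread.getState(), the thread lifecycle state. A minimal hypothetical Peer class (not the real QuorumPeer) reproduces it:

```java
// Hypothetical stand-in for a peer class that extends Thread.
public class Peer extends Thread {
    // The domain-specific state the JMX bean was supposed to expose.
    public String getServerState() {
        return "following"; // e.g. one of: leaderelection, leading, following, observing
    }
}
```

Calling `peer.getState()` on an unstarted Peer yields `Thread.State.NEW` rather than "following", which is exactly the wrong value the JMX bean reported; the fix is to call `getServerState()` explicitly.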
| ZooKeeper | ZOOKEEPER-2238 | Support limiting the maximum number of connections/clients to a zookeeper server. |
Improvement | Resolved | Major | Fixed | Sujith Simon | nijel | nijel | 04/Aug/15 01:08 | 13/Nov/19 23:54 | 13/Nov/19 21:55 | 3.6.0 | 2 | 10 | 0 | 14400 | ZOOKEEPER-2280 | Currently ZooKeeper has a feature for limiting the maximum number of connections/clients per IP or host (maxClientCnxns). But to safeguard the ZooKeeper server from DoS attacks caused by many clients from different IPs, it is better to also have a limit on the total number of connections/clients to a single member of the ZooKeeper ensemble. So the idea is to introduce a new configuration option to limit the maximum total number of connections/clients. Please share your thoughts. |
100% | 100% | 14400 | 0 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 7 | 9223372036854775807 | 18 weeks ago | 0|i2iasv: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
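The global cap proposed in ZOOKEEPER-2238 could look something like the following sketch (class and method names are hypothetical; the real accept path and thread-safety concerns are omitted): keep a per-IP count as today's maxClientCnxns does, plus one shared counter checked on every accept.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the proposed check: reject a new connection if either the
// per-IP limit (existing maxClientCnxns) or a new global limit is hit.
public class CnxnLimitSketch {
    private final int maxClientCnxns;  // per-IP limit (existing)
    private final int maxTotalCnxns;   // global limit (proposed)
    private final Map<String, Integer> perIp = new HashMap<>();
    private int total = 0;

    public CnxnLimitSketch(int maxClientCnxns, int maxTotalCnxns) {
        this.maxClientCnxns = maxClientCnxns;
        this.maxTotalCnxns = maxTotalCnxns;
    }

    public boolean tryAccept(String ip) {
        int forIp = perIp.getOrDefault(ip, 0);
        if (forIp >= maxClientCnxns || total >= maxTotalCnxns) {
            return false; // the server would drop/close this connection
        }
        perIp.put(ip, forIp + 1);
        total++;
        return true;
    }
}
```

With a per-IP limit of 2 and a global limit of 3, a third connection from one IP is rejected by the per-IP check, and a first connection from a third IP is rejected by the global check.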
| ZooKeeper | ZOOKEEPER-2237 | ZOOKEEPER-1572 Port async multi to 3.4 branch |
Sub-task | Resolved | Major | Fixed | Ivan Kelly | Ivan Kelly | Ivan Kelly | 30/Jul/15 04:26 | 31/Jul/15 23:46 | 31/Jul/15 23:46 | 3.4.7 | java client | 0 | 3 | Async multi is available in 3.5 branch, but this is currently alpha, and doesn't look like it'll be GA in the next 6 months. I've run into a few cases where async multi would be really useful, but we want to stick with a GA client. Thus, I'm going to backport Sijie's patch to 3.4. |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 4 years, 33 weeks, 5 days ago | 0|i2i42v: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2236 | ZooKeeper truncates file to 0 bytes |
Bug | Open | Major | Unresolved | Unassigned | Bharat Singh | Bharat Singh | 23/Jul/15 11:46 | 12/Dec/16 03:36 | 3.4.6 | contrib-zkfuse, server, tests | 0 | 3 | Ubuntu14.04, Standalone Zookeeper server, Zkfuse | I am facing a rename issue with Zkfuse. I am trying to test atomic file updates. After some iterations the file size becomes 0. This is easily reproducible by running the script below for ~5 minutes. Setup: zookeeper-3.4.6 with Zkfuse mounted, size of testFile = 1k {noformat} while [ 1 ]; do cp /root/testFile /mnt/zk/testFile.tmp; mv /mnt/zk/testFile.tmp /mnt/zk/testFile; ls -larth /mnt/zk/; sleep 1; done {noformat} Zkfuse debug logs don't show any suspicious activity. It looks like the zookeeper/zkfuse RENAME is not atomic. But code browsing and log messages show that update has issues: 1) update is not able to pull data from zookeeper due to _refCnt > 1, so rename gets an empty ZkfuseFile object. 2) I also hit an assert in update, assert(newFile == false || _isOnlyRegOpen()); Now I suspect the refcount logic. Has anyone faced similar issues, or used Zkfuse in a production environment? |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 |
Important
|
4 years, 35 weeks ago | 0|i2hujj: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2235 | License update |
Bug | Closed | Blocker | Fixed | Flavio Paiva Junqueira | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 17/Jul/15 12:30 | 21/Jul/16 16:18 | 03/May/16 14:30 | 3.4.6, 3.5.0 | 3.4.7, 3.5.1, 3.5.2, 3.6.0 | 0 | 5 | Updating license files and notice.txt as needed. Here is a list of the jars we are currently bundling with the release artifact with the corresponding license: # commons-cli-1.2.jar -- ASF # javacc.jar -- BSD license # jline-2.11.jar -- BSD license # servlet-api-2.5-20081211.jar - CDDL # jackson-core-asl-1.9.11.jar -- ALv2 # jetty-6.1.26.jar -- ALv2 # log4j-1.2.16.jar -- ALv2 # jackson-mapper-asl-1.9.11.jar -- ALv2 # jetty-util-6.1.26.jar -- ALv2 # netty-3.7.0.Final.jar -- ALv2 # slf4j-log4j12-1.7.5.jar -- MIT |
9223372036854775807 | No Perforce job exists for this issue. | 9 | 9223372036854775807 | 3 years, 46 weeks, 2 days ago |
Reviewed
|
0|i2he5z: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2234 | Snapshot serialization race condition can lead to partial transaction and inoperable data node |
Bug | Open | Minor | Unresolved | Unassigned | Adam Milne-Smith | Adam Milne-Smith | 16/Jul/15 06:30 | 16/Jul/15 06:30 | 3.4.6 | 0 | 2 | This issue can be reproduced by creating a node with a new ACL during data tree serialization after ACL cache serialization. When restoring from this snapshot without the tranlog, the state will include a node with no corresponding ACL in the ACL cache. This node will then be impossible to operate on as it will cause a MarshallingError. If the tranlog is played over a server in this erroneous state, it does appear to correct itself. This bug means that to reliably restore from a snapshot, you must also have backed up the subsequent tranlog covering at least the transactions that were partially written to the snapshot. Issue first described here: http://mail-archives.apache.org/mod_mbox/zookeeper-user/201507.mbox/%3C0LzCmv-1YtgSd0Dqb-014Qqf@mrelayeu.kundenserver.de%3E It also appears possible for a snapshot to be missing a session yet contain an ephemeral node created by that session; fortunately ZooKeeperServer.loadData() should clean these ephemerals up. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 36 weeks ago | 0|i2hbw7: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2233 | Invalid description in the comment of LearnerHandler.syncFollower() |
Improvement | Open | Trivial | Unresolved | Hitoshi Mitake | Hitoshi Mitake | Hitoshi Mitake | 14/Jul/15 02:51 | 14/Jul/15 14:05 | 0 | 3 | LearnerHandler.syncFollower() has a comment like below: When leader election is completed, the leader will set its lastProcessedZxid to be (epoch < 32). There will be no txn associated with this zxid. However, IIUC, the expression "epoch < 32" (comparison) should be "epoch << 32" (bitshift). Of course the error is very trivial but it was a little bit confusing for me, so I'd like to fix it. |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 |
Patch
|
4 years, 36 weeks, 2 days ago | 0|i2h7yn: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
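For context on the ZOOKEEPER-2233 comment fix: a zxid packs the epoch into the high 32 bits and a transaction counter into the low 32 bits, so the corrected expression "epoch << 32" denotes the first zxid of a new epoch. A standalone sketch of the arithmetic (helper names hypothetical):

```java
// A zxid is a 64-bit id: high 32 bits = epoch, low 32 bits = txn counter.
// After leader election the leader sets lastProcessedZxid to (epoch << 32),
// i.e. counter 0 — which is what the comment should say.
public class ZxidSketch {
    public static long make(long epoch, long counter) {
        return (epoch << 32) | (counter & 0xFFFFFFFFL);
    }

    public static long epochOf(long zxid) {
        return zxid >>> 32;
    }

    public static long counterOf(long zxid) {
        return zxid & 0xFFFFFFFFL;
    }

    public static void main(String[] args) {
        long zxid = make(5, 0); // fresh epoch 5, no txns yet
        System.out.println(Long.toHexString(zxid)); // 500000000
        System.out.println(epochOf(zxid));          // 5
    }
}
```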
| ZooKeeper | ZOOKEEPER-2232 | zkperl is out of sync with CPAN releases |
Improvement | Open | Minor | Unresolved | Unassigned | Mark Flickinger | Mark Flickinger | 11/Jul/15 22:16 | 10/Mar/16 18:42 | contrib-bindings | 0 | 2 | First some background here: Around 6 months ago I had some bug fixes I wanted patched in zkperl, which I had been using through [CPAN|https://metacpan.org/pod/Net::ZooKeeper]. I volunteered to take over maintenance of the code, and Chris Darroch set me up with permissions to push new releases to CPAN. At the time I was unaware that Net::ZooKeeper was bundled with the rest of the ZooKeeper code, and hadn't reached out to any other ZooKeeper devs. Since that time there have been a few CPAN releases, which included some feature requests from users([#1|https://github.com/mark-5/p5-net-zookeeper/pull/1], [#2|https://github.com/mark-5/p5-net-zookeeper/pull/2]), bug fixes([#3|https://github.com/mark-5/p5-net-zookeeper/pull/3], [#5|https://github.com/mark-5/p5-net-zookeeper/issues/5]) and test failure fixes. I'd love for zkperl to be kept in sync with CPAN releases, and to have other ZooKeeper devs approve feature requests from users. Is there any preferred process for keeping these distributions in sync? I don't currently have any specific ideas. I mainly wanted to start the conversation here, with other devs. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 2 weeks ago | 0|i2h567: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2231 | ServerSocket opened by ZooKeeperServer cannot use SO_REUSEADDR under Linux |
Bug | Open | Major | Unresolved | Unassigned | Gábor Lipták | Gábor Lipták | 03/Jul/15 03:55 | 10/Jul/15 11:59 | 3.4.6 | server | 0 | 3 | $ uname -a Linux 3.19.0-20-generic #20-Ubuntu SMP Fri May 29 10:10:47 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux $ java -version java version "1.7.0_60" Java(TM) SE Runtime Environment (build 1.7.0_60-b19) Java HotSpot(TM) 64-Bit Server VM (build 24.60-b09, mixed mode) |
I think reporting [this stackoverflow question|http://stackoverflow.com/q/31163513/337621] to the ZooKeeper team is important. org.apache.zookeeper.server.NIOServerCnxnFactory.configure(InetSocketAddress, int) has the following code: {code:java} @Override public void configure(InetSocketAddress addr, int maxcc) throws IOException { configureSaslLogin(); thread = new Thread(this, "NIOServerCxn.Factory:" + addr); thread.setDaemon(true); maxClientCnxns = maxcc; this.ss = ServerSocketChannel.open(); ss.socket().setReuseAddress(true); LOG.info("binding to port " + addr); ss.socket().bind(addr); ss.configureBlocking(false); ss.register(selector, SelectionKey.OP_ACCEPT); } {code} So the intention is to use SO_REUSEADDR. This does not work under Linux (at least with the Java version I use). The reason is that sun.nio.ch.ServerSocketChannelImpl.setOption(SocketOption<T>, T) used by ZooKeeper has this code: {code:java} public <T> ServerSocketChannel setOption(SocketOption<T> paramSocketOption, T paramT) throws IOException { if (paramSocketOption == null) throw new NullPointerException(); if (!(supportedOptions().contains(paramSocketOption))) throw new UnsupportedOperationException("'" + paramSocketOption + "' not supported"); synchronized (this.stateLock) { if (!(isOpen())) throw new ClosedChannelException(); if ((paramSocketOption == StandardSocketOptions.SO_REUSEADDR) && (Net.useExclusiveBind())) { this.isReuseAddress = ((Boolean)paramT).booleanValue(); } else { Net.setSocketOption(this.fd, Net.UNSPEC, paramSocketOption, paramT); } return this; } } {code} "Net.useExclusiveBind()" seems to always return false under Linux, no matter what value is set for the [sun.net.useExclusiveBind|http://www.oracle.com/technetwork/java/javase/7u25-relnotes-1955741.html#napi-win] environment entry. If someone wants to stop and start an embedded ZooKeeper server, this can result in BindExceptions. A workaround under Linux would be very helpful. 
Also under Windows, the sun.net.useExclusiveBind environment entry seems to be important for getting the SO_REUSEADDR option. It may be worth documenting this network setting. I have [test code|http://pastebin.com/Hhyfiz3Y] which can reproduce the BindException under Linux. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 36 weeks, 6 days ago | 0|i2gtq7: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
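For reference, this is how SO_REUSEADDR is requested and read back through the NIO channel API — a standalone sketch; whether the kernel actually honors the option when rebinding is exactly what ZOOKEEPER-2231 questions:

```java
import java.io.IOException;
import java.net.StandardSocketOptions;
import java.nio.channels.ServerSocketChannel;

// Sketch: request SO_REUSEADDR on a server channel and read the option
// back. getOption() reports what was requested at the Java level; the
// reported bind-time behavior may still differ per platform and JDK.
public class ReuseAddrSketch {
    public static boolean reuseAfterSet() {
        try (ServerSocketChannel ch = ServerSocketChannel.open()) {
            ch.setOption(StandardSocketOptions.SO_REUSEADDR, true);
            return ch.getOption(StandardSocketOptions.SO_REUSEADDR);
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("SO_REUSEADDR requested: " + reuseAfterSet());
    }
}
```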
| ZooKeeper | ZOOKEEPER-2230 | Connections to ZooKeeper server become slow over time with native GSSAPI |
Bug | Patch Available | Major | Unresolved | Enis Soztutar | Deepesh Reja | Deepesh Reja | 02/Jul/15 02:13 | 02/Jan/20 05:45 | 3.4.6, 3.4.7, 3.4.8, 3.5.0 | 3.4.6, 3.4.7, 3.4.8, 3.5.2 | server | 1 | 13 | 0 | 1800 | ZOOKEEPER-2670 | OS: RHEL6 Java: 1.8.0_40 Configuration: java.env: {noformat} SERVER_JVMFLAGS="$SERVER_JVMFLAGS -Xmx5120m" SERVER_JVMFLAGS="$SERVER_JVMFLAGS -Djava.security.auth.login.config=/local/apps/zookeeper-test1/conf/jaas-server.conf" SERVER_JVMFLAGS="$SERVER_JVMFLAGS -Dsun.security.jgss.native=true" {noformat} jaas-server.conf: {noformat} Server { com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true isInitiator=false principal="zookeeper/<hostname>@<REALM>"; }; {noformat} Process environment: {noformat} KRB5_KTNAME=/local/apps/zookeeper-test1/conf/keytab ZOO_LOG_DIR=/local/apps/zookeeper-test1/log ZOOCFGDIR=/local/apps/zookeeper-test1/conf {noformat} |
ZooKeeper server becomes slow over time when native GSSAPI is used. The connection to the server starts taking upto 10 seconds. This is happening with ZooKeeper-3.4.6 and is fairly reproducible. Debug logs: {noformat} 2015-07-02 00:58:49,318 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:NIOServerCnxnFactory@197] - Accepted socket connection from /<client_ip>:47942 2015-07-02 00:58:49,318 [myid:] - DEBUG [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:ZooKeeperSaslServer@78] - serviceHostname is '<zookeeper-server>' 2015-07-02 00:58:49,318 [myid:] - DEBUG [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:ZooKeeperSaslServer@79] - servicePrincipalName is 'zookeeper' 2015-07-02 00:58:49,318 [myid:] - DEBUG [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:ZooKeeperSaslServer@80] - SASL mechanism(mech) is 'GSSAPI' 2015-07-02 00:58:49,324 [myid:] - DEBUG [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:ZooKeeperSaslServer@106] - Added private credential to subject: [GSSCredential: zookeeper@<zookeeper-server> 1.2.840.113554.1.2.2 Accept [class sun.security.jgss.wrapper.GSSCredElement]] 2015-07-02 00:58:59,441 [myid:] - DEBUG [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:ZooKeeperServer@810] - Session establishment request from client /<client_ip>:47942 client's lastZxid is 0x0 2015-07-02 00:58:59,441 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:ZooKeeperServer@868] - Client attempting to establish new session at /<client_ip>:47942 2015-07-02 00:58:59,448 [myid:] - DEBUG [SyncThread:0:FinalRequestProcessor@88] - Processing request:: sessionid:0x14e486028785c81 type:createSession cxid:0x0 zxid:0x110e79 txntype:-10 reqpath:n/a 2015-07-02 00:58:59,448 [myid:] - DEBUG [SyncThread:0:FinalRequestProcessor@160] - sessionid:0x14e486028785c81 type:createSession cxid:0x0 zxid:0x110e79 txntype:-10 reqpath:n/a 2015-07-02 00:58:59,448 [myid:] - INFO [SyncThread:0:ZooKeeperServer@617] - Established session 0x14e486028785c81 with negotiated timeout 10000 for client 
/<client_ip>:47942 2015-07-02 00:58:59,452 [myid:] - DEBUG [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:ZooKeeperServer@949] - Responding to client SASL token. 2015-07-02 00:58:59,452 [myid:] - DEBUG [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:ZooKeeperServer@953] - Size of client SASL token: 706 2015-07-02 00:58:59,460 [myid:] - DEBUG [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:ZooKeeperServer@984] - Size of server SASL response: 161 2015-07-02 00:58:59,462 [myid:] - DEBUG [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:ZooKeeperServer@949] - Responding to client SASL token. 2015-07-02 00:58:59,462 [myid:] - DEBUG [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:ZooKeeperServer@953] - Size of client SASL token: 0 2015-07-02 00:58:59,462 [myid:] - DEBUG [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:ZooKeeperServer@984] - Size of server SASL response: 32 2015-07-02 00:58:59,463 [myid:] - DEBUG [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:ZooKeeperServer@949] - Responding to client SASL token. 2015-07-02 00:58:59,463 [myid:] - DEBUG [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:ZooKeeperServer@953] - Size of client SASL token: 32 2015-07-02 00:58:59,464 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:SaslServerCallbackHandler@118] - Successfully authenticated client: authenticationID=<user_principal>; authorizationID=<user_principal>. 
2015-07-02 00:58:59,464 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:ZooKeeperServer@964] - adding SASL authorization for authorizationID: <user_principal> 2015-07-02 00:58:59,465 [myid:] - INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@494] - Processed session termination for sessionid: 0x14e486028785c81 2015-07-02 00:58:59,467 [myid:] - DEBUG [SyncThread:0:FinalRequestProcessor@88] - Processing request:: sessionid:0x14e486028785c81 type:closeSession cxid:0x1 zxid:0x110e7a txntype:-11 reqpath:n/a 2015-07-02 00:58:59,467 [myid:] - DEBUG [SyncThread:0:FinalRequestProcessor@160] - sessionid:0x14e486028785c81 type:closeSession cxid:0x1 zxid:0x110e7a txntype:-11 reqpath:n/a 2015-07-02 00:58:59,467 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:42405:NIOServerCnxn@1007] - Closed socket connection for client /<client_ip>:47942 which had sessionid 0x14e486028785c81 {noformat} If you see, after adding the credentials to privateCredential set, it takes roughly 10 seconds to reach to session establishment request. From the code it looks like Subject.doAs() is taking a lot of time. 
I connected it to jdb while it was waiting and got following stacktrace: {noformat} NIOServerCxn.Factory:0.0.0.0/0.0.0.0:58909: [1] java.util.HashMap$TreeNode.find (HashMap.java:1,865) [2] java.util.HashMap$TreeNode.find (HashMap.java:1,861) [3] java.util.HashMap$TreeNode.find (HashMap.java:1,861) [4] java.util.HashMap$TreeNode.find (HashMap.java:1,861) [5] java.util.HashMap$TreeNode.find (HashMap.java:1,861) [6] java.util.HashMap$TreeNode.find (HashMap.java:1,861) [7] java.util.HashMap$TreeNode.find (HashMap.java:1,861) [8] java.util.HashMap$TreeNode.putTreeVal (HashMap.java:1,981) [9] java.util.HashMap.putVal (HashMap.java:637) [10] java.util.HashMap.put (HashMap.java:611) [11] java.util.HashSet.add (HashSet.java:219) [12] javax.security.auth.Subject$ClassSet.populateSet (Subject.java:1,418) [13] javax.security.auth.Subject$ClassSet.<init> (Subject.java:1,372) [14] javax.security.auth.Subject.getPrivateCredentials (Subject.java:767) [15] sun.security.jgss.GSSUtil$1.run (GSSUtil.java:340) [16] sun.security.jgss.GSSUtil$1.run (GSSUtil.java:332) [17] java.security.AccessController.doPrivileged (native method) [18] sun.security.jgss.GSSUtil.searchSubject (GSSUtil.java:332) [19] sun.security.jgss.wrapper.NativeGSSFactory.getCredFromSubject (NativeGSSFactory.java:53) [20] sun.security.jgss.wrapper.NativeGSSFactory.getCredentialElement (NativeGSSFactory.java:116) [21] sun.security.jgss.GSSManagerImpl.getCredentialElement (GSSManagerImpl.java:193) [22] sun.security.jgss.GSSCredentialImpl.add (GSSCredentialImpl.java:427) [23] sun.security.jgss.GSSCredentialImpl.<init> (GSSCredentialImpl.java:62) [24] sun.security.jgss.GSSManagerImpl.createCredential (GSSManagerImpl.java:154) [25] com.sun.security.sasl.gsskerb.GssKrb5Server.<init> (GssKrb5Server.java:108) [26] com.sun.security.sasl.gsskerb.FactoryImpl.createSaslServer (FactoryImpl.java:85) [27] javax.security.sasl.Sasl.createSaslServer (Sasl.java:524) [28] org.apache.zookeeper.server.ZooKeeperSaslServer$1.run 
(ZooKeeperSaslServer.java:118) [29] org.apache.zookeeper.server.ZooKeeperSaslServer$1.run (ZooKeeperSaslServer.java:114) [30] java.security.AccessController.doPrivileged (native method) [31] javax.security.auth.Subject.doAs (Subject.java:422) [32] org.apache.zookeeper.server.ZooKeeperSaslServer.createSaslServer (ZooKeeperSaslServer.java:114) [33] org.apache.zookeeper.server.ZooKeeperSaslServer.<init> (ZooKeeperSaslServer.java:48) [34] org.apache.zookeeper.server.NIOServerCnxn.<init> (NIOServerCnxn.java:100) [35] org.apache.zookeeper.server.NIOServerCnxnFactory.createConnection (NIOServerCnxnFactory.java:161) [36] org.apache.zookeeper.server.NIOServerCnxnFactory.run (NIOServerCnxnFactory.java:202) [37] java.lang.Thread.run (Thread.java:745) {noformat} This doesn't happen when we use JGSS. I think adding a credential to the privateCredential set for every connection is causing Subject.doAs() to take much longer. |
100% | 100% | 1800 | 0 | patch, pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 11 weeks ago | Fix slowness in connections when setup with native GSSAPI. | kerberos, native-gssapi | 0|i2grxb: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
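The mechanism suspected in ZOOKEEPER-2230 — one credential added to the shared login Subject's private-credential set per accepted connection, never removed — can be simulated with the JDK's javax.security.auth.Subject alone. Plain Objects stand in for GSSCredential here; this is an illustration, not the server code:

```java
import javax.security.auth.Subject;

// Sketch of the accumulation: a credential is added to the login Subject's
// private credentials on every connection and never removed, so the set
// grows without bound and later scans of it (as in the stack trace above,
// via Subject.getPrivateCredentials) keep getting slower.
public class CredentialGrowthSketch {
    public static int simulateConnections(int connections) {
        Subject subject = new Subject();
        for (int i = 0; i < connections; i++) {
            // one stand-in credential per accepted connection
            subject.getPrivateCredentials().add(new Object());
        }
        return subject.getPrivateCredentials().size();
    }

    public static void main(String[] args) {
        // nothing is ever evicted, so the set size equals the connection count
        System.out.println(simulateConnections(1000));
    }
}
```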
| ZooKeeper | ZOOKEEPER-2229 | Several four-letter words are undocumented. |
Bug | Closed | Major | Fixed | Chris Nauroth | Chris Nauroth | Chris Nauroth | 01/Jul/15 20:18 | 21/Jul/16 16:18 | 10/Dec/15 01:16 | 3.4.8, 3.5.2, 3.6.0 | documentation | 0 | 5 | ZOOKEEPER-2227 | The {{isro}}, {{gtmk}} and {{stmk}} commands are not covered in the four-letter word documentation. | 9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 4 years, 15 weeks ago | 0|i2grkn: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2228 | WorkerReceiver's main loop (in FastLeaderElection's) should break loop upon restart |
Bug | Resolved | Major | Not A Problem | Unassigned | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | 01/Jul/15 19:46 | 02/Jul/15 21:40 | 02/Jul/15 21:40 | server | 0 | 3 | It seems like in FastLeaderElection#Messenger#WorkerReceiver the main loop should be left immediately after this path \[0\] is taken: {code} if (!rqv.equals(curQV)) { LOG.info("restarting leader election"); self.shuttingDownLE = true; self.getElectionAlg().shutdown(); } {code} Instead, it keeps going, which means the received message would still be applied and a new message might be sent out. Should there be a break statement right after self.getElectionAlg().shutdown()? Any ideas [~shralex]? \[0\]: https://github.com/apache/zookeeper/blob/trunk/src/java/main/org/apache/zookeeper/server/quorum/FastLeaderElection.java#L300 |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 37 weeks, 6 days ago | 0|i2griv: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2227 | stmk four-letter word fails execution at server while reading trace mask argument. |
Bug | Closed | Major | Fixed | Chris Nauroth | Chris Nauroth | Chris Nauroth | 01/Jul/15 15:22 | 21/Jul/16 16:18 | 08/Nov/15 17:31 | 3.3.0 | 3.4.7, 3.5.2, 3.6.0 | server | 0 | 7 | ZOOKEEPER-2278, ZOOKEEPER-2229, ZOOKEEPER-572 | When the server handles the {{stmk}} four-letter word, it attempts to read an 8-byte Java {{long}} from the request as the trace mask argument. The read fails, because the destination buffer's capacity is only 4 bytes. | 9223372036854775807 | No Perforce job exists for this issue. | 3 | 9223372036854775807 | 4 years, 19 weeks, 4 days ago | 0|i2gr4n: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
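The 4-byte-buffer-versus-8-byte-long mismatch described in ZOOKEEPER-2227 is easy to reproduce with java.nio.ByteBuffer — a standalone sketch, not the actual server code:

```java
import java.nio.ByteBuffer;

// Sketch of the stmk failure mode: the trace mask arrives as an 8-byte
// long, but the destination buffer holds only `bufferCapacity` bytes, so
// getLong() underflows whenever the capacity is less than 8.
public class TraceMaskSketch {
    public static long readMask(int bufferCapacity, long sentMask) {
        ByteBuffer wire = ByteBuffer.allocate(8).putLong(sentMask);
        wire.flip();
        ByteBuffer dest = ByteBuffer.allocate(bufferCapacity);
        while (dest.hasRemaining() && wire.hasRemaining()) {
            dest.put(wire.get());
        }
        dest.flip();
        return dest.getLong(); // BufferUnderflowException if capacity < 8
    }

    public static void main(String[] args) {
        System.out.println(readMask(8, 0xCAFEL)); // works
        try {
            readMask(4, 0xCAFEL); // the bug: 4 bytes cannot hold a long
        } catch (java.nio.BufferUnderflowException e) {
            System.out.println("underflow with 4-byte buffer");
        }
    }
}
```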
| ZooKeeper | ZOOKEEPER-2226 | Mixing sequential and non-sequential can throw NodeExists for sequential nodes |
Bug | Open | Major | Unresolved | Unassigned | David Capwell | David Capwell | 01/Jul/15 14:22 | 23/Feb/19 03:53 | 3.4.5 | 0 | 2 | I have the following code (in curator): {code} int id = extractId(client.create().creatingParentsIfNeeded().withMode(CreateMode.PERSISTENT_SEQUENTIAL).forPath(prefix, data)); {code} and {code} client.create().creatingParentsIfNeeded().withMode(CreateMode.PERSISTENT).forPath(path(id), data); {code} The first part joins our cluster and gets an id from zookeeper. The second call creates a znode whose name looks like the znodes created above. The reason I do this is that I would like ops to be able to define the ids when they want, and not always have to (other code will "setData" on one of the paths defined above; I'm leaving that out since it's not having issues). I created a test case, and the error thrown was not what I was expecting: NodeExists. Here is the test: create 4 PERSISTENT znodes with ids 1, 2, 3, 4; create 1 PERSISTENT_SEQUENTIAL znode (id = 4, so it conflicts with the above). Here is the error I saw: INFO 2015-07-01 10:46:46,349 [ProcessThread(sid:0 cport:-1):] [PrepRequestProcessor] [line 627] Got user-level KeeperException when processing sessionid:0x14e4aba4d490000 type:create cxid:0x25 zxid:0xe txntype:-1 reqpath:n/a Error Path:/test/MembershipTest/replaceFourRegisterOne/member-0000000004 Error:KeeperErrorCode = NodeExists for /test/MembershipTest/replaceFourRegisterOne/member-0000000004 org.apache.zookeeper.KeeperException$NodeExistsException: KeeperErrorCode = NodeExists for /test/MembershipTest/replaceFourRegisterOne/member- ... 
Caused by: org.apache.zookeeper.KeeperException$NodeExistsException: KeeperErrorCode = NodeExists for /test/MembershipTest/replaceFourRegisterOne/member- at org.apache.zookeeper.KeeperException.create(KeeperException.java:119) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783) at org.apache.curator.framework.imps.CreateBuilderImpl$11.call(CreateBuilderImpl.java:688) at org.apache.curator.framework.imps.CreateBuilderImpl$11.call(CreateBuilderImpl.java:672) at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:107) at org.apache.curator.framework.imps.CreateBuilderImpl.pathInForeground(CreateBuilderImpl.java:668) at org.apache.curator.framework.imps.CreateBuilderImpl.protectedPathInForeground(CreateBuilderImpl.java:453) at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:443) at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:44) When using sequential nodes, it's unexpected that they can fail because a node already exists. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 1 year, 3 weeks, 5 days ago | 0|i2gr1j: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
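The collision reported in ZOOKEEPER-2226 is possible because a sequential create simply appends the parent's counter, zero-padded to ten digits, to the supplied prefix — so a plain create that uses the same naming scheme can occupy that path first. A sketch of the name clash (stand-in code, not Curator or ZooKeeper internals):

```java
import java.util.HashSet;
import java.util.Set;

// Sketch: sequential znode names are prefix + 10-digit zero-padded counter.
// If a non-sequential create already took "member-0000000004", a sequential
// create whose counter has reached 4 targets the very same path.
public class SeqNameSketch {
    public static String sequentialName(String prefix, int counter) {
        return prefix + String.format("%010d", counter);
    }

    public static void main(String[] args) {
        Set<String> existing = new HashSet<>();
        existing.add(sequentialName("member-", 4)); // created non-sequentially

        String next = sequentialName("member-", 4); // sequential create, counter = 4
        System.out.println(next + " exists already: " + existing.contains(next));
    }
}
```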
| ZooKeeper | ZOOKEEPER-2225 | modify existing 'configuration' command or add new one to return list of ZooKeeper's ensemble members |
Improvement | Open | Major | Unresolved | Unassigned | Grigoriy Starchenko | Grigoriy Starchenko | 30/Jun/15 09:56 | 30/Jun/15 12:52 | server | 0 | 3 | Linux, AWS cloud | Hi, all. We are building a high-availability ZooKeeper cluster at AWS and are using version 3.5.0 because it supports dynamic reconfiguration. Everything works except for one problem: it is difficult for ZooKeeper clients to discover the current ensemble list. The obvious solution is to put ZooKeeper behind an AWS load balancer. A client, during initialization, would call ZooKeeper via the load balancer to read /zookeeper/config and would be able to build a connection string. We quickly discovered that the ZooKeeper API does not work through an AWS load balancer. ZooKeeper, starting from 3.5.0, supports the AdminServer option, which works just fine behind any type of load balancer. The catch is: no command available to date returns the list of hosts representing the ensemble. http://localhost:8080/commands/... provides a lot of info, but none of the commands returns {code} server.4108=10.50.4.108:2888:3888:participant;0.0.0.0:2181 server.316=10.50.3.16:2888:3888:participant;0.0.0.0:2181 server.1215=10.50.1.215:2888:3888:participant;0.0.0.0:2181 version=100000000 {code} I think it would be very useful to add a new command, http://localhost:8080/commands/dconfig, which would return the current ZooKeeper dynamic configuration: {code} { "server.4108" : "10.50.4.108:2888:3888:participant;0.0.0.0:2181", "server.316" : "10.50.3.16:2888:3888:participant;0.0.0.0:2181", "server.1215" : "10.50.1.215:2888:3888:participant;0.0.0.0:2181", "version" : "100000000" } {code} Thank you, Grisha |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 38 weeks, 2 days ago | 0|i2goqv: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
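The dconfig output proposed in ZOOKEEPER-2225 is essentially the dynamic configuration re-serialized as key/value JSON, and the transformation itself is small. A sketch of the parsing step only (the AdminServer command wiring and HTTP plumbing are omitted; the command name dconfig is the reporter's proposal, not an existing command):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: turn dynamic-configuration lines ("key=value", one per line)
// into an ordered map that an AdminServer command could emit as JSON.
public class DconfigSketch {
    public static Map<String, String> parse(String config) {
        Map<String, String> out = new LinkedHashMap<>();
        for (String line : config.split("\n")) {
            int eq = line.indexOf('=');
            if (eq > 0) {
                out.put(line.substring(0, eq).trim(), line.substring(eq + 1).trim());
            }
        }
        return out;
    }
}
```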
| ZooKeeper | ZOOKEEPER-2224 | Four letter command hangs when network is slow |
Bug | Resolved | Minor | Fixed | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 30/Jun/15 06:00 | 25/Aug/15 23:40 | 06/Jul/15 11:50 | 3.4.7, 3.5.1, 3.6.0 | java client | 0 | 9 | Four letter command hangs when network is slow or network goes down in between the operation, and the application also, which calling this four letter command, hangs. | 9223372036854775807 | No Perforce job exists for this issue. | 5 | 9223372036854775807 | 4 years, 30 weeks, 1 day ago | 0|i2godz: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2223 | support method-level JUnit testcase |
Improvement | Resolved | Minor | Fixed | Akihiro Suda | Akihiro Suda | Akihiro Suda | 30/Jun/15 02:26 | 28/Jul/15 02:47 | 07/Jul/15 02:46 | 3.5.1, 3.6.0 | tests | 0 | 6 | Currently, a user can execute a single test at class level, but not at method level. This patch adds support for method-level single tests, to ease debugging of failing tests (like ZOOKEEPER-2080). Class-level test (exists in current version) {panel} $ ant -Dtestcase=ReconfigRecoveryTest test-core-java {panel} Method-level test (proposal) {panel} $ ant -Dtestcase=ReconfigRecoveryTest -Dtest.method=testCurrentObserverIsParticipantInNewConfig test-core-java {panel} |
9223372036854775807 | No Perforce job exists for this issue. | 4 | 9223372036854775807 |
Patch
|
4 years, 37 weeks, 2 days ago | 0|i2go0f: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2222 | Fail fast if `myid` does not exist but server.N properties are defined |
Improvement | Resolved | Minor | Invalid | Unassigned | Joe Halliwell | Joe Halliwell | 26/Jun/15 12:17 | 26/Jun/15 13:01 | 26/Jun/15 13:01 | 3.4.6 | server | 0 | 1 | Under these circumstances the server logs a warning, but starts in standalone mode. I think it should exit. | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 38 weeks, 6 days ago | 0|i2gjvr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2221 | Zookeeper JettyAdminServer server should start on configured IP. |
Bug | Resolved | Major | Fixed | Surendra Singh Lilhore | Surendra Singh Lilhore | Surendra Singh Lilhore | 25/Jun/15 10:07 | 28/Jul/15 02:48 | 30/Jun/15 14:51 | 3.5.0 | 3.5.1, 3.6.0 | server | 0 | 7 | Currently JettyAdminServer starts on the "0.0.0.0" IP. "0.0.0.0" means "all IP addresses on the local machine". So, if your web server machine has two IP addresses, 192.168.1.1 (private) and 10.1.2.1 (public), and you allow a web server daemon like Apache to listen on 0.0.0.0, it will be reachable at both of those IPs. This is a security issue; the web server should be accessible only from the configured IP. |
9223372036854775807 | No Perforce job exists for this issue. | 5 | 9223372036854775807 | 4 years, 38 weeks, 1 day ago | 0|i2ghsn: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2220 | Couldn't instantiate org.apache.zookeeper.ClientCnxnSocketNetty |
Bug | Open | Major | Unresolved | Unassigned | rupa mogali | rupa mogali | 24/Jun/15 13:57 | 10/Oct/17 01:50 | 3.5.0 | c client | 0 | 4 | Alpha | I am trying to test SSL connectivity between client and server following the instructions on the following page: https://cwiki.apache.org/confluence/display/ZOOKEEPER/ZooKeeper+SSL+User+Guide But I get the following when trying to connect to the server from the client: 2015-06-24 12:14:36,589 [myid:] - INFO [main:ZooKeeper@709] - Initiating client connection, connectString=localhost:2282 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@f2a0b8e Exception in thread "main" java.io.IOException: Couldn't instantiate org.apache.zookeeper.ClientCnxnSocketNetty Can you tell me what I am doing wrong here? Very new to ZooKeeper. Thanks! |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 38 weeks, 3 days ago | 0|i2gg3r: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2219 | ZooKeeper server should better handle SessionMovedException. |
Bug | Open | Major | Unresolved | Unassigned | Zhihai Xu | Zhihai Xu | 24/Jun/15 02:39 | 21/Nov/18 21:09 | 3.4.5 | 2 | 6 | YARN-3798 | ZooKeeper server should better handle SessionMovedException. We hit the SessionMovedException. the following is the reason for the SessionMovedException we find: 1. ZK client tried to connect to Leader L. Network was very slow, so before leader processed the request, client disconnected. 2. Client then re-connected to Follower F reusing the same session ID. It was successful. 3. The request in step 1 went into leader. Leader processed it and invalidated the connection created in step 2. But client didn't know the connection it used is invalidated. 4. Client got SessionMovedException when it used the connection invalidated by leader for any ZooKeeper operation. The following are logs: c045dkh is the Leader, c470udy is the Follower and the sessionID is 0x14be28f50f4419d. 1. ZK client try to initiate session to Leader at 015-03-16 10:59:40,735 and timeout after 10/3 seconds. logs from ZK client {code} 2015-03-16 10:59:40,078 INFO org.apache.zookeeper.ClientCnxn: Client session timed out, have not heard from server in 6670ms for sessionid 0x14be28f50f4419d, closing socket connection and attempting reconnect 015-03-16 10:59:40,735 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server c045dkh/?.?.?.67:2181. Will not attempt to authenticate using SASL (unknown error) 2015-03-16 10:59:40,735 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to c045dkh/?.?.?.67:2181, initiating session 2015-03-16 10:59:44,071 INFO org.apache.zookeeper.ClientCnxn: Client session timed out, have not heard from server in 3336ms for sessionid 0x14be28f50f4419d, closing socket connection and attempting reconnect {code} 2. 
The ZK client initiated a session to the Follower successfully at 2015-03-16 10:59:44,688. Logs from the ZK client: {code} 2015-03-16 10:59:44,673 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server c470udy/?.?.?.65:2181. Will not attempt to authenticate using SASL (unknown error) 2015-03-16 10:59:44,673 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to c470udy/?.?.?.65:2181, initiating session 2015-03-16 10:59:44,688 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server c470udy/?.?.?.65:2181, sessionid = 0x14be28f50f4419d, negotiated timeout = 10000 {code} Logs from the ZK Follower server: {code} 2015-03-16 10:59:44,673 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket connection from /?.?.?.65:42777 2015-03-16 10:59:44,674 INFO org.apache.zookeeper.server.ZooKeeperServer: Client attempting to renew session 0x14be28f50f4419d at /?.?.?.65:42777 2015-03-16 10:59:44,674 INFO org.apache.zookeeper.server.quorum.Learner: Revalidating client: 0x14be28f50f4419d 2015-03-16 10:59:44,675 INFO org.apache.zookeeper.server.ZooKeeperServer: Established session 0x14be28f50f4419d with negotiated timeout 10000 for client /?.?.?.65:42777 {code} 3. At 2015-03-16 10:59:45,668, the Leader processed the delayed request sent by the client at 2015-03-16 10:59:40,735; right after the session was established, it closed the socket connection/session because the client had already disconnected due to the timeout.
logs from ZK Leader server {code} 2015-03-16 10:59:45,668 INFO org.apache.zookeeper.server.ZooKeeperServer: Client attempting to renew session 0x14be28f50f4419d at /?.?.?.65:50271 2015-03-16 10:59:45,668 INFO org.apache.zookeeper.server.ZooKeeperServer: Established session 0x14be28f50f4419d with negotiated timeout 10000 for client /?.?.?.65:50271 2015-03-16 10:59:45,670 WARN org.apache.zookeeper.server.NIOServerCnxn: Exception causing close of session 0x14be28f50f4419d due to java.io.IOException: Broken pipe 2015-03-16 10:59:45,671 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection for client /?.?.?.65:50271 which had sessionid 0x14be28f50f4419d {code} 4. Client got SessionMovedException at 2015-03-16 10:59:45,693 logs from ZK Leader server {code} 2015-03-16 10:59:45,693 INFO org.apache.zookeeper.server.PrepRequestProcessor: Got user-level KeeperException when processing sessionid:0x14be28f50f4419d type:multi cxid:0x86e3 zxid:0x1c002a4e53 txntype:-1 reqpath:n/a aborting remaining multi ops. Error Path:null Error:KeeperErrorCode = Session moved 2015-03-16 10:59:45,695 INFO org.apache.zookeeper.server.PrepRequestProcessor: Got user-level KeeperException when processing sessionid:0x14be28f50f4419d type:multi cxid:0x86e5 zxid:0x1c002a4e56 txntype:-1 reqpath:n/a aborting remaining multi ops. Error Path:null Error:KeeperErrorCode = Session moved 2015-03-16 10:59:45,700 INFO org.apache.zookeeper.server.PrepRequestProcessor: Got user-level KeeperException when processing sessionid:0x14be28f50f4419d type:multi cxid:0x86e7 zxid:0x1c002a4e57 txntype:-1 reqpath:n/a aborting remaining multi ops. Error Path:null Error:KeeperErrorCode = Session moved {code} 5. At 2015-03-16 10:59:45,710, we close the session 0x14be28f50f4419d but the socket connection between ZK client and ZK Follower is closed at 2015-03-16 10:59:45,715 after session termination. 
Logs from the ZK Leader server: {code} 2015-03-16 10:59:45,710 INFO org.apache.zookeeper.server.PrepRequestProcessor: Processed session termination for sessionid: 0x14be28f50f4419d {code} Logs from the ZK Follower server: {code} 2015-03-16 10:59:45,715 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection for client /?.?.?.65:42777 which had sessionid 0x14be28f50f4419d {code} It looks like the ZK client is out of sync with the ZK server. My question is how the ZK client can recover from this error. It looks like the ZK client won't be disconnected from the Follower until the session is closed, and any ZooKeeper operation will fail with SessionMovedException before the session is closed. Also, since the ZK Leader already closed the socket connection/session to the ZK client at step 3, why does it still reject ZooKeeper operations from the client with SessionMovedException? Would it be better to endorse the session/connection between the ZK client and the ZK Follower? This seems like a bug to me. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 1 year, 17 weeks ago | 0|i2gf4v: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2218 | Close IO Streams in finally block |
Improvement | Reopened | Major | Unresolved | Tang Xinye | Tang Xinye | Tang Xinye | 22/Jun/15 22:32 | 14/Dec/19 06:08 | 3.7.0 | 0 | 6 | 0 | 600 | The problem here is that if an exception is thrown during the read, the method will exit without closing the stream and hence without releasing the file-system resources; the process may eventually run out of resources. | 100% | 100% | 600 | 0 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 1 year, 20 weeks, 6 days ago | Place the close() call in a finally clause, so we can ensure it always runs regardless of how the method exits. | 0|i2gd9b: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
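The fix proposed for ZOOKEEPER-2218 is the standard Java close-in-finally pattern, most idiomatically written as try-with-resources (which compiles to a finally block). A minimal sketch under that reading; the `readAll` helper is hypothetical, not the actual ZooKeeper code:

```java
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

public class SafeRead {
    // Read a file fully. try-with-resources guarantees the stream is
    // closed whether the block returns normally or read() throws,
    // so file descriptors are always released.
    static byte[] readAll(File f) throws IOException {
        try (FileInputStream in = new FileInputStream(f)) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            return out.toByteArray();
        } // in.close() runs here on every exit path
    }
}
```

On Java 6 and earlier the same guarantee requires an explicit `finally { in.close(); }` around the read loop.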
| ZooKeeper | ZOOKEEPER-2217 | events might be lost before re-watch |
Improvement | Resolved | Major | Not A Problem | Unassigned | Caspian | Caspian | 18/Jun/15 23:25 | 26/Jun/15 23:01 | 26/Jun/15 23:01 | 3.4.5, 3.4.6 | c client, java client | 1 | 4 | jdk1.7_45 on centos6.5 and ubuntu14.4 | I use ZK to monitor the children nodes under a path, e.g. /servers. When the client is told that the children changed, I have to re-watch the path again; during that period, it's possible that some children go down, or some come up, and those events will be missed. For now, my temporary solution is not to use getChildren(path, true...) to get the children and re-watch this path, but to re-watch this path first, then get the children. Thus no events are missed, but I don't know how the ZK server will behave if there are too many clients that act like this. What do you think of this problem? Are there any other solutions? |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 38 weeks, 5 days ago | 0|i2g927: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2216 | Get the property hierarchy as a whole tree |
Improvement | Open | Minor | Unresolved | Unassigned | Nabarun Mondal | Nabarun Mondal | 16/Jun/15 13:38 | 16/Jun/15 13:38 | 3.5.0 | c client | 0 | 1 | I am logging this as a feature request. We use ZooKeeper pretty extensively. Thanks for putting out a pretty awesome product! As of now, there is no way to ask ZooKeeper for the whole property hierarchy as a single tree in one call. We would be grateful if you could add a facility to fetch the whole property tree at once. NOTE: I personally won't mind coding this, if you permit me. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 40 weeks, 2 days ago | 0|i2g44f: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2215 | Four letter commands don't have Kerberos authentication |
Bug | Open | Major | Unresolved | Unassigned | Surendra Singh Lilhore | Surendra Singh Lilhore | 12/Jun/15 08:09 | 13/Jun/15 01:34 | 0 | 2 | echo dump | netcat <IP> <port> | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 40 weeks, 5 days ago | 0|i2fz07: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2214 | Findbugs warning: LearnerHandler.packetToString Dead store to local variable |
Improvement | Resolved | Minor | Fixed | Hongchao Deng | Hongchao Deng | Hongchao Deng | 11/Jun/15 11:46 | 13/Jun/15 06:37 | 12/Jun/15 17:04 | 3.5.1, 3.6.0 | 0 | 3 | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 40 weeks, 5 days ago | 0|i2fxrb: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2213 | Empty path in Set crashes server and prevents restart |
Bug | Resolved | Blocker | Fixed | Hongchao Deng | Brian Brazil | Brian Brazil | 10/Jun/15 11:29 | 29/Jun/15 13:43 | 11/Jun/15 14:16 | 3.4.5 | 3.4.7, 3.5.1, 3.6.0 | server | 0 | 7 | See https://github.com/samuel/go-zookeeper/issues/62 I've reproduced this on 3.4.5 with the code: c, _, _ := zk.Connect([]string{"127.0.0.1"}, time.Second) c.Set("", []byte{}, 0) This crashes a local zookeeper 3.4.5 server: 2015-06-10 16:21:10,862 [myid:] - ERROR [SyncThread:0:SyncRequestProcessor@151] - Severe unrecoverable error, exiting java.lang.IllegalArgumentException: Invalid path at org.apache.zookeeper.common.PathTrie.findMaxPrefix(PathTrie.java:259) at org.apache.zookeeper.server.DataTree.getMaxPrefixWithQuota(DataTree.java:634) at org.apache.zookeeper.server.DataTree.setData(DataTree.java:616) at org.apache.zookeeper.server.DataTree.processTxn(DataTree.java:807) at org.apache.zookeeper.server.ZKDatabase.processTxn(ZKDatabase.java:329) at org.apache.zookeeper.server.ZooKeeperServer.processTxn(ZooKeeperServer.java:965) at org.apache.zookeeper.server.FinalRequestProcessor.processRequest(FinalRequestProcessor.java:116) at org.apache.zookeeper.server.SyncRequestProcessor.flush(SyncRequestProcessor.java:167) at org.apache.zookeeper.server.SyncRequestProcessor.run(SyncRequestProcessor.java:101) On restart the zookeeper server crashes out: 2015-06-10 16:22:21,352 [myid:] - ERROR [main:ZooKeeperServerMain@54] - Invalid arguments, exiting abnormally java.lang.IllegalArgumentException: Invalid path at org.apache.zookeeper.common.PathTrie.findMaxPrefix(PathTrie.java:259) at org.apache.zookeeper.server.DataTree.getMaxPrefixWithQuota(DataTree.java:634) at org.apache.zookeeper.server.DataTree.setData(DataTree.java:616) at org.apache.zookeeper.server.DataTree.processTxn(DataTree.java:807) at org.apache.zookeeper.server.persistence.FileTxnSnapLog.processTransaction(FileTxnSnapLog.java:198) at org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:151) at 
org.apache.zookeeper.server.ZKDatabase.loadDataBase(ZKDatabase.java:223) at org.apache.zookeeper.server.ZooKeeperServer.loadData(ZooKeeperServer.java:250) at org.apache.zookeeper.server.ZooKeeperServer.startdata(ZooKeeperServer.java:377) at org.apache.zookeeper.server.NIOServerCnxnFactory.startup(NIOServerCnxnFactory.java:122) at org.apache.zookeeper.server.ZooKeeperServerMain.runFromConfig(ZooKeeperServerMain.java:112) at org.apache.zookeeper.server.ZooKeeperServerMain.initializeAndRun(ZooKeeperServerMain.java:86) at org.apache.zookeeper.server.ZooKeeperServerMain.main(ZooKeeperServerMain.java:52) at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:116) at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:78) |
9223372036854775807 | No Perforce job exists for this issue. | 5 | 9223372036854775807 | 4 years, 38 weeks, 3 days ago | 0|i2fvtr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
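The ZOOKEEPER-2213 crash above is triggered by an empty path reaching the transaction path of the server. The basic path rules ZooKeeper enforces (non-empty, absolute, no trailing slash except root, no empty segments) can be sketched in a simplified, standalone form; this hypothetical `isValidPath` is an illustration of those rules, not the actual `PathUtils.validatePath` implementation, which also rejects "." / ".." segments and certain characters:

```java
public class PathCheck {
    // Simplified subset of ZooKeeper's path rules.
    static boolean isValidPath(String path) {
        if (path == null || path.isEmpty()) return false;        // the crash case: ""
        if (path.charAt(0) != '/') return false;                 // must be absolute
        if (path.length() == 1) return true;                     // root "/" is valid
        if (path.charAt(path.length() - 1) == '/') return false; // no trailing slash
        if (path.contains("//")) return false;                   // no empty segments
        return true;
    }
}
```

The fix shipped in 3.4.7/3.5.1 hardens the server so that such requests are rejected before being logged as transactions, rather than crashing on replay.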
| ZooKeeper | ZOOKEEPER-2212 | distributed race condition related to QV version |
Bug | Resolved | Critical | Fixed | Akihiro Suda | Akihiro Suda | Akihiro Suda | 10/Jun/15 02:45 | 14/Aug/15 00:12 | 15/Jun/15 19:08 | 3.5.0 | 3.5.1, 3.6.0 | quorum | 0 | 8 | When a joiner is listed as an observer in an initial config, the joiner should become a non-voting follower (not an observer) until reconfig is triggered. [(Link)|http://zookeeper.apache.org/doc/trunk/zookeeperReconfig.html#sc_reconfig_general] I found a distributed race-condition situation where an observer keeps being an observer and cannot become a non-voting follower. This race condition happens when an observer receives an UPTODATE Quorum Packet from the leader:2888/tcp *after* receiving a Notification FLE Packet of which n.config version is larger than the observer's one from leader:3888/tcp. h4. Detail * Problem: An observer cannot become a non-voting follower * Cause: Cannot restart FLE * Cause: In {{QuorumPeer.run()}}, cannot shutdown {{Observer}} [(Link)|https://github.com/apache/zookeeper/blob/98a3cabfa279833b81908d72f1c10ee9f598a045/src/java/main/org/apache/zookeeper/server/quorum/QuorumPeer.java#L1014] * Cause: In {{QuorumPeer.run()}}, cannot return from {{Observer.observeLeader()}} [(Link)|https://github.com/apache/zookeeper/blob/98a3cabfa279833b81908d72f1c10ee9f598a045/src/java/main/org/apache/zookeeper/server/quorum/QuorumPeer.java#L1010] * Cause: In {{Observer.observeLeader()}}, {{Learner.syncWithLeader()}} does not throw an exception of "changes proposed in reconfig" [(Link)|https://github.com/apache/zookeeper/blob/98a3cabfa279833b81908d72f1c10ee9f598a045/src/java/main/org/apache/zookeeper/server/quorum/Observer.java#L79] * Cause: In {{switch(qp.getType()) case UPTODATE}} of {{Learner.syncWithLeader()}} [(Link)|https://github.com/apache/zookeeper/blob/98a3cabfa279833b81908d72f1c10ee9f598a045/src/java/main/org/apache/zookeeper/server/quorum/Learner.java#L492-507], {{QuorumPeer.processReconfig()}} 
[(Link)|https://github.com/apache/zookeeper/blob/98a3cabfa279833b81908d72f1c10ee9f598a045/src/java/main/org/apache/zookeeper/server/quorum/QuorumPeer.java#L1644]returns false with a log message like ["2 setQuorumVerifier called with known or old config 4294967296. Current version: 4294967296"|https://github.com/osrg/earthquake/blob/v0.1/example/zk-found-bug.ether/example-output/3.REPRODUCED/zk2.log]. [(Link)|https://github.com/apache/zookeeper/blob/98a3cabfa279833b81908d72f1c10ee9f598a045/src/java/main/org/apache/zookeeper/server/quorum/QuorumPeer.java#L1369] , * Cause: The observer have already received a Notification Packet({{n.config.version=4294967296}}) and invoked {{QuorumPeer.processReconfig()}} [(Link)|https://github.com/apache/zookeeper/blob/98a3cabfa279833b81908d72f1c10ee9f598a045/src/java/main/org/apache/zookeeper/server/quorum/FastLeaderElection.java#L291-304] h4. How I found this bug I found this bug using [Earthquake|http://osrg.github.io/earthquake/], our open-source dynamic model checker for real implementations of distributed systems. Earthquakes permutes C/Java function calls, Ethernet packets, and injected fault events in various orders so as to find implementation-level bugs of the distributed system. When Earthquake finds a bug, Earthquake automatically records [the event history|https://github.com/osrg/earthquake/blob/v0.1/example/zk-found-bug.ether/example-output/3.REPRODUCED/json] and helps the user to analyze which permutation of events triggers the bug. I analyzed Earthquake's event histories and found that the bug is triggered when an observer receives an UPTODATE *after* receiving a specific kind of FLE packet. h4. How to reproduce this bug You can also easily reproduce the bug using Earthquake. 
I made a Docker container [osrg/earthquake-zookeeper-2212|https://registry.hub.docker.com/u/osrg/earthquake-zookeeper-2212/] on Docker hub: {code} host$ sudo modprobe openvswitch host$ docker run --privileged -t -i --rm osrg/earthquake-zookeeper-2212 guest$ ./000-prepare.sh [INFO] Starting Earthquake Ethernet Switch [INFO] Starting Earthquake Orchestrator [INFO] Starting Earthquake Ethernet Inspector [IMPORTANT] Please kill the processes (switch=1234, orchestrator=1235, and inspector=1236) after you finished all of the experiments [IMPORTANT] Please continue to 100-run-experiment.sh.. guest$ ./100-run-experiment.sh [IMPORTANT] THE BUG WAS REPRODUCED! guest$ kill -9 1234 1235 1236 {code} Note that {{--privileged}} is needed, as this container uses Docker-in-Docker. For further information about reproducing this bug, please refer to https://github.com/osrg/earthquake/blob/v0.1/example/zk-found-bug.ether |
9223372036854775807 | No Perforce job exists for this issue. | 3 | 9223372036854775807 | 4 years, 31 weeks, 6 days ago | https://github.com/osrg/earthquake/tree/v0.1/example/zk-found-bug.ether | 0|i2fv5r: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2211 | PurgeTxnLog does not correctly purge when snapshots and logs are at different locations |
Bug | Closed | Major | Fixed | Mohammad Arshad | Wesley Chow | Wesley Chow | 09/Jun/15 16:45 | 21/Jul/16 16:18 | 03/Dec/15 23:20 | 3.4.6, 3.5.0 | 3.4.8, 3.5.2, 3.6.0 | scripts | 0 | 8 | Ubuntu 12.04, Java 1.7. | PurgeTxnLog does not work when snapshots and transaction logs are at different file paths. The argument handling is buggy and only works when both snap and datalog dirs are given, and datalog dir contains both logs and snaps (snap is ignored). | 9223372036854775807 | No Perforce job exists for this issue. | 5 | 9223372036854775807 |
Patch
|
4 years, 15 weeks, 6 days ago | 0|i2ful3: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
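The retention rule ZOOKEEPER-2211 says PurgeTxnLog fails to apply across separate directories is, in essence: keep the N newest snapshots, and keep every transaction log from the newest log that starts at or before the oldest retained snapshot onward (that log may contain transactions needed for replay). A standalone sketch of that rule, operating on zxids only; `purgeableLogs` is a hypothetical helper, not the actual PurgeTxnLog code:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class PurgePlan {
    // Return the log-start zxids that are safe to delete when keeping
    // the `keep` newest snapshots. The newest log starting at or before
    // the oldest retained snapshot is kept, since replay may need it.
    static List<Long> purgeableLogs(List<Long> snapZxids, List<Long> logZxids, int keep) {
        List<Long> snaps = new ArrayList<>(snapZxids);
        Collections.sort(snaps);
        if (snaps.size() <= keep) return new ArrayList<>(); // nothing to purge
        long oldestKept = snaps.get(snaps.size() - keep);

        List<Long> logs = new ArrayList<>(logZxids);
        Collections.sort(logs);
        long boundary = Long.MIN_VALUE;
        for (long z : logs) if (z <= oldestKept) boundary = z;

        List<Long> purgeable = new ArrayList<>();
        for (long z : logs) if (z < boundary) purgeable.add(z);
        return purgeable;
    }
}
```

The bug report is that when snapshots live in one directory and logs in another, the real tool only looked in one of them, so this rule was applied to an incomplete file set.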
| ZooKeeper | ZOOKEEPER-2210 | clock_gettime is not available in os x |
Bug | Resolved | Major | Fixed | Michi Mutsuzaki | Michi Mutsuzaki | Michi Mutsuzaki | 09/Jun/15 00:47 | 22/Jun/15 06:46 | 21/Jun/15 20:20 | 3.5.1, 3.6.0 | c client | 0 | 6 | {noformat} src/zookeeper.c:286:9: warning: implicit declaration of function 'clock_gettime' is invalid in C99 [-Wimplicit-function-declaration] ret = clock_gettime(CLOCK_MONOTONIC, &ts); ^ src/zookeeper.c:286:23: error: use of undeclared identifier 'CLOCK_MONOTONIC' ret = clock_gettime(CLOCK_MONOTONIC, &ts); {noformat} |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 4 years, 39 weeks, 3 days ago | 0|i2fsrj: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2209 | A .NET C# version of ZooKeeper client |
New Feature | Resolved | Major | Won't Fix | Shay Hazor | Shay Hazor | Shay Hazor | 08/Jun/15 03:19 | 13/Oct/16 16:38 | 31/Oct/15 03:24 | 3.4.6 | 0 | 6 | 0 | 0 | 0% | .NET CoreCLR | Inspired by the work of [~ewhauser] . I propose a C# Client that supports the current stable version of ZK 3.4.6. It was built by using static code conversion tools followed by manual editing and C# implementations of java selector and other java constructs. A great measure was taken to follow the logic of the java version. In fact, the code is almost identical. Thus allowing easy evolution alongside the java version. Main features: * fully .NET async, no explicit threads used * all relevant unit tests have been converted and passing consistently * Code is 100% CoreCLR compliant * [NuGet package|https://www.nuget.org/packages/ZooKeeperNetEx] is already integrated in [Microsoft Project Orleans|https://github.com/dotnet/orleans] as the only open-source membership provider. * [Nuget package for recipes|https://www.nuget.org/packages/ZooKeeperNetEx.Recipes] |
0% | 0% | 0 | 0 | .NET, CoreCLR, async, c# | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 23 weeks ago | a ZooKeeper .NET async Client for ZK v3.4.6 Current Limitations: * No support for system properties (currently the defaults are used). * No SASL support |
31 | https://github.com/apache/zookeeper/pull/31 | 0|i2fqsn: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2208 | Log type of unexpected quorum packet in observer loop |
Improvement | Resolved | Trivial | Fixed | Hitoshi Mitake | Akihiro Suda | Akihiro Suda | 05/Jun/15 04:26 | 07/Jun/15 21:25 | 05/Jun/15 15:31 | 3.5.0 | 3.5.1, 3.6.0 | server | 0 | 3 | ZOOKEEPER-2205 | This patch lets the observer loop log the type of packet for debugging. This issue is tightly related to ZOOKEEPER-2205 |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 |
Patch
|
4 years, 41 weeks, 3 days ago | 0|i2fo2f: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2207 | Enhance error logs with LearnerHandler.packetToString() |
Improvement | Resolved | Trivial | Fixed | Hitoshi Mitake | Hitoshi Mitake | Hitoshi Mitake | 04/Jun/15 03:52 | 07/Jun/15 02:04 | 05/Jun/15 15:18 | 3.5.0 | 3.5.1, 3.6.0 | server | 0 | 3 | This patch enhances error logs related to unexpected types of QuorumPacket with LearnerHandler.packetToString(). | 9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 |
Patch
|
4 years, 41 weeks, 4 days ago | 0|i2fmb3: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2206 | Add missing packet types to LearnerHandler.packetToString() |
Improvement | Resolved | Trivial | Fixed | Hitoshi Mitake | Hitoshi Mitake | Hitoshi Mitake | 04/Jun/15 03:44 | 07/Jun/15 02:04 | 05/Jun/15 15:01 | 3.5.0 | 3.5.1, 3.6.0 | server | 0 | 3 | packetToString() is the method for obtaining a string representation of a QuorumPacket, but it lacks some QuorumPacket types. This patch adds the missing types and enhances the method for friendlier logging. | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 |
Patch
|
4 years, 41 weeks, 4 days ago | 0|i2fman: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2205 | Log type of unexpected quorum packet in learner handler loop |
Improvement | Resolved | Trivial | Fixed | Hitoshi Mitake | Hitoshi Mitake | Hitoshi Mitake | 04/Jun/15 01:44 | 07/Jun/15 02:04 | 05/Jun/15 14:45 | 3.4.6, 3.5.0 | 3.4.7, 3.5.1, 3.6.0 | server | 0 | 5 | ZOOKEEPER-2208 | The current learner handler loop doesn't log anything when it receives an unexpected type of quorum packet from a learner. This patch lets the learner handler loop log the packet type for defensive purposes. It would make debugging and troubleshooting a little bit easier. |
9223372036854775807 | No Perforce job exists for this issue. | 4 | 9223372036854775807 |
Patch
|
4 years, 41 weeks, 4 days ago | 0|i2fm6n: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2204 | LearnerSnapshotThrottlerTest.testHighContentionWithTimeout fails occasionally |
Test | Resolved | Minor | Fixed | Donny Nadolny | Donny Nadolny | Donny Nadolny | 03/Jun/15 12:04 | 19/May/18 19:50 | 04/Jun/15 13:40 | 3.5.0 | 3.5.1, 3.6.0 | 0 | 5 | ZOOKEEPER-3047 | The {{LearnerSnapshotThrottler}} will only allow 2 concurrent snapshots to be taken, and if there are already 2 snapshots in progress it will wait up to 200ms for one to complete. This isn't enough time for {{testHighContentionWithTimeout}} to consistently pass - on a cold JVM running just the one test I was able to get it to fail 3 times in around 50 runs. This 200ms timeout will be hit if there is a delay between a thread calling {{LearnerSnapshot snap = throttler.beginSnapshot(false);}} and {{throttler.endSnapshot();}}. This also erroneously fails on the build server, see https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2747/testReport/org.apache.zookeeper.server.quorum/LearnerSnapshotThrottlerTest/testHighContentionWithTimeout/ for an example. I have bumped the timeout up to 5 seconds (which should be more than enough for warmup / gc pauses), as well as added logging to the {{catch (Exception e)}} block to assist in debugging any future issues. An alternate approach would be to separate out results gathered from the threads, because although we only record true/false there are really three outcomes: 1. The {{snapshotNumber}} was <= 2, meaning the individual call operated correctly 2. The {{snapshotNumber}} was > 2, meaning the test should definitely fail 3. We were unable to snapshot in the time given, so we can't determine if we should fail or pass (although if we have "enough" successes from #1 with no failures from #2 maybe we would pass the test anyway). Bumping up the timeout is easier. |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 |
Patch
|
4 years, 41 weeks, 6 days ago | 0|i2fl1r: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
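The throttling tested by ZOOKEEPER-2204 is essentially a counted permit with a bounded wait: at most two snapshots in flight, and a caller blocks up to a timeout for a slot. A minimal analogue using `java.util.concurrent.Semaphore`; this `SnapshotThrottle` class is an assumed equivalent for illustration, not the actual `LearnerSnapshotThrottler`:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class SnapshotThrottle {
    private final Semaphore permits;
    private final long timeoutMs;

    SnapshotThrottle(int maxConcurrent, long timeoutMs) {
        this.permits = new Semaphore(maxConcurrent);
        this.timeoutMs = timeoutMs;
    }

    // Begin a snapshot, waiting up to timeoutMs for a free slot.
    // Returns false on timeout -- the condition the flaky test hits
    // when a 200ms window is too short for a loaded JVM.
    boolean beginSnapshot() throws InterruptedException {
        return permits.tryAcquire(timeoutMs, TimeUnit.MILLISECONDS);
    }

    void endSnapshot() {
        permits.release();
    }
}
```

This makes the patch's rationale concrete: raising the timeout from 200ms to 5s only widens the `tryAcquire` wait; it does not change how many snapshots may run concurrently.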
| ZooKeeper | ZOOKEEPER-2203 | multiple leaders can be elected when configs conflict |
Bug | Resolved | Major | Not A Problem | Unassigned | Akihiro Suda | Akihiro Suda | 02/Jun/15 23:36 | 16/Jun/15 14:17 | 16/Jun/15 14:17 | 3.5.0 | leaderElection | 0 | 2 | This sequence leads the ensemble to a split-brain state: * Start server 1 (config=1:participant, 2:participant, 3:participant) * Start server 2 (config=1:participant, 2:participant, 3:participant) * 1 and 2 believe 2 is the leader * Start server 3 (config=1:observer, 2:observer, 3:participant) * 3 believes 3 is the leader, although 1 and 2 still believe 2 is the leader Such a split-brain ensemble is very unstable. Znodes can be lost easily: * Create some znodes on 2 * Restart 1 and 2 * 1, 2 and 3 can think 3 is the leader * znodes created on 2 are lost, as 1 and 2 sync with 3 I consider this behavior as a bug and that ZK should fail gracefully if a participant is listed as an observer in the config. In current implementation, ZK cannot detect such an invalid config, as FastLeaderElection.sendNotification() sends notifications to only voting members and hence there is no message from observers(1 and 2) to the new voter (3). I think FastLeaderElection.sendNotification() should send notifications to all the members and FastLeaderElection.Messenger.WorkerReceiver.run() should verify acks. Any thoughts? |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 40 weeks, 2 days ago | ZOOKEEPER-368 | 0|i2fk0f: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2202 | Cluster crashes when reconfig adds an unreachable observer |
Bug | Patch Available | Major | Unresolved | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | 02/Jun/15 17:44 | 05/Feb/20 07:11 | 3.5.0, 3.6.0 | 3.7.0, 3.5.8 | 0 | 7 | While adding support for reconfig() in Kazoo (https://github.com/python-zk/kazoo/pull/333) I found that the cluster can be crashed if you add an observer whose election port isn't reachable (i.e.: packets for that destination are dropped, not rejected). This will raise a SocketTimeoutException which will bring down the PrepRequestProcessor: {code} 2015-06-02 14:37:16,473 [myid:3] - WARN [ProcessThread(sid:3 cport:-1)::QuorumCnxManager@384] - Cannot open channel to 100 at election address /8.8.8.8:38703 java.net.SocketTimeoutException: connect timed out at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:589) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:369) at org.apache.zookeeper.server.quorum.QuorumPeer.connectNewPeers(QuorumPeer.java:1288) at org.apache.zookeeper.server.quorum.QuorumPeer.setLastSeenQuorumVerifier(QuorumPeer.java:1315) at org.apache.zookeeper.server.quorum.Leader.propose(Leader.java:1056) at org.apache.zookeeper.server.quorum.ProposalRequestProcessor.processRequest(ProposalRequestProcessor.java:78) at org.apache.zookeeper.server.PrepRequestProcessor.pRequest(PrepRequestProcessor.java:877) at org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:143) {code} A simple repro can be obtained by using the code in the referenced pull request above and using 8.8.8.8:3888 (for example) instead of a free (but closed) port in the loopback. 
I think that adding an Observer (or a Participant) that isn't currently reachable is a valid use case (i.e.: you are provisioning the machine and it's not currently needed) so I think we could handle this with lower connect timeouts, not sure. |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 1 year, 17 weeks, 1 day ago | 0|i2fjgv: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2201 | Network issues can cause cluster to hang due to near-deadlock |
Bug | Closed | Critical | Fixed | Donny Nadolny | Donny Nadolny | Donny Nadolny | 01/Jun/15 21:30 | 21/Jul/16 16:18 | 06/Jun/15 12:54 | 3.4.6, 3.5.0 | 3.4.7, 3.5.2, 3.6.0 | 0 | 10 | {{DataTree.serializeNode}} synchronizes on the {{DataNode}} it is about to serialize then writes it out via {{OutputArchive.writeRecord}}, potentially to a network connection. Under default linux TCP settings, a network connection where the other side completely disappears will hang (blocking on the {{java.net.SocketOutputStream.socketWrite0}} call) for over 15 minutes. During this time, any attempt to create/delete/modify the {{DataNode}} will cause the leader to hang at the beginning of the request processor chain: {noformat} "ProcessThread(sid:5 cport:-1):" prio=10 tid=0x00000000026f1800 nid=0x379c waiting for monitor entry [0x00007fe6c2a8c000] java.lang.Thread.State: BLOCKED (on object monitor) at org.apache.zookeeper.server.PrepRequestProcessor.getRecordForPath(PrepRequestProcessor.java:163) - waiting to lock <0x00000000d4cd9e28> (a org.apache.zookeeper.server.DataNode) - locked <0x00000000d2ef81d0> (a java.util.ArrayList) at org.apache.zookeeper.server.PrepRequestProcessor.pRequest2Txn(PrepRequestProcessor.java:345) at org.apache.zookeeper.server.PrepRequestProcessor.pRequest(PrepRequestProcessor.java:534) at org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:131) {noformat} Additionally, any attempt to send a snapshot to a follower or to disk will hang. Because the ping packets are sent by another thread which is unaffected, followers never time out and become leader, even though the cluster will make no progress until either the leader is killed or the TCP connection times out. This isn't exactly a deadlock since it will resolve itself eventually, but as mentioned above this will take > 15 minutes with the default TCP retry settings in linux. 
A simple solution to this is: in {{DataTree.serializeNode}} we can take a copy of the contents of the {{DataNode}} (as is done with its children) in the synchronized block, then call {{writeRecord}} with the copy of the {{DataNode}} outside of the original {{DataNode}} synchronized block. |
9223372036854775807 | No Perforce job exists for this issue. | 6 | 9223372036854775807 | 4 years, 41 weeks, 4 days ago | 0|i2fhnj: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
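The fix proposed in ZOOKEEPER-2201 is a copy-then-serialize pattern: hold the node's monitor only long enough to copy its fields, then do the potentially blocking network write outside the lock, so a stalled socket can no longer block mutators. A self-contained sketch; the `Node` class and `serializeNode` helper here are simplified stand-ins, not the actual `DataTree` code:

```java
import java.io.IOException;
import java.io.OutputStream;

public class CopyThenWrite {
    static class Node {
        byte[] data;
        Node(byte[] d) { data = d; }
    }

    // Snapshot the node's state under its monitor, then write outside it.
    // A write that hangs for minutes (e.g. dead TCP peer) now blocks only
    // this serializer thread, not every thread mutating the node.
    static void serializeNode(Node node, OutputStream out) throws IOException {
        byte[] copy;
        synchronized (node) {   // short critical section: copy only
            copy = node.data.clone();
        }
        out.write(copy);        // slow/blocking I/O without holding the lock
    }
}
```

The trade-off is a transient extra copy per node during serialization, in exchange for never performing I/O inside the `DataNode` monitor that the request-processor chain also needs.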
| ZooKeeper | ZOOKEEPER-2200 | Perl ZooKeeper locks up during heavy load |
Bug | Open | Major | Unresolved | Unassigned | KARA VAN HORN | KARA VAN HORN | 01/Jun/15 15:02 | 30/Mar/16 12:42 | 3.4.6 | c client | 0 | 2 | CentOS 5.8 | We are using Perl Net::ZooKeeper (0.38) and Net::ZooKeeper::Lock (0.03) libraries. Deadlock appears to occur at the end during lock cleanup activity. Here is a stack dump (sensitive names changed):
Thread 2 (Thread 0x2ac6fbfa3940 (LWP 13292)):
#0 0x00002ac6f5aed654 in __lll_lock_wait () from /lib64/libpthread.so.0
#1 0x00002ac6f5aeb47b in pthread_cond_signal@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#2 0x00002ac6f835539c in _zk_watcher (handle=<value optimized out>, type=2, state=3, path=<value optimized out>, context=0x33f3ce0) at ZooKeeper.xs:179
#3 0x00002ac6f856d942 in do_foreach_watcher (zh=0x33e4fb0, type=2, state=3, path=0x33f3f50 "/lock/cmts/cisco_device1.net-0001851215", list=0x33ed290) at /home/myhome/rpm/BUILD/zookeeper-3.4.6/src/c/src/zk_hashtable.c:279
#4 deliverWatchers (zh=0x33e4fb0, type=2, state=3, path=0x33f3f50 "/lock/cmts/cisco_device1.net-0001851215", list=0x33ed290) at /home/myhome/rpm/BUILD/zookeeper-3.4.6/src/c/src/zk_hashtable.c:321
#5 0x00002ac6f8564966 in process_completions (zh=0x33e4fb0) at /home/myhome/rpm/BUILD/zookeeper-3.4.6/src/c/src/zookeeper.c:2114
#6 0x00002ac6f856e101 in do_completion (v=<value optimized out>) at /home/myhome/rpm/BUILD/zookeeper-3.4.6/src/c/src/mt_adaptor.c:466
#7 0x00002ac6f5ae683d in start_thread (arg=<value optimized out>) at pthread_create.c:301
#8 0x00002ac6f5dd1fcd in clone () from /lib64/libc.so.6
Thread 1 (Thread 0x2ac6f6056af0 (LWP 12972)):
#0 0x00002ac6f5ae7c65 in pthread_join (threadid=47034119371072, thread_return=0x0) at pthread_join.c:89
#1 0x00002ac6f856e7de in adaptor_finish (zh=0x33e4fb0) at /home/myhome/rpm/BUILD/zookeeper-3.4.6/src/c/src/mt_adaptor.c:293
#2 0x00002ac6f8566cdc in zookeeper_close (zh=0x33e4fb0) at /home/myhome/rpm/BUILD/zookeeper-3.4.6/src/c/src/zookeeper.c:2536
#3 0x00002ac6f8357222 in XS_Net__ZooKeeper_DESTROY (my_perl=0x20df010, cv=<value optimized out>) at ZooKeeper.xs:885
#4 0x00002ac6f4b38af6 in Perl_pp_entersub () from /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi/CORE/libperl.so
#5 0x00002ac6f4adb8d7 in ?? () from /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi/CORE/libperl.so
#6 0x00002ac6f4adf720 in Perl_call_sv () from /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi/CORE/libperl.so
#7 0x00002ac6f4b3d3c6 in Perl_sv_clear () from /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi/CORE/libperl.so
#8 0x00002ac6f4b3db70 in Perl_sv_free () from /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi/CORE/libperl.so
#9 0x00002ac6f4b6025c in Perl_free_tmps () from /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi/CORE/libperl.so
#10 0x00002ac6f4adf78a in Perl_call_sv () from /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi/CORE/libperl.so
#11 0x00002ac6f4b3d3c6 in Perl_sv_clear () from /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi/CORE/libperl.so
#12 0x00002ac6f4b3db70 in Perl_sv_free () from /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi/CORE/libperl.so
#13 0x00002ac6f4b3b0e5 in ?? () from /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi/CORE/libperl.so
#14 0x00002ac6f4b3b141 in Perl_sv_clean_objs () from /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi/CORE/libperl.so
#15 0x00002ac6f4ae185e in perl_destruct () from /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi/CORE/libperl.so
#16 0x0000000000401773 in main ()
There are about 4 out of 10,000 processes that end up in deadlock, and according to our web searches, the only reason pthread_cond_signal would lock is due to it waiting on an already destroyed condition. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 51 weeks, 1 day ago | 0|i2fgzz: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2199 | Don't include unistd.h in windows |
Bug | Resolved | Major | Duplicate | Michi Mutsuzaki | Michi Mutsuzaki | Michi Mutsuzaki | 31/May/15 19:28 | 19/Dec/19 18:02 | 01/Jun/15 02:36 | 3.5.1 | c client | 0 | 2 | Windows doesn't have unistd.h. https://builds.apache.org/view/S-Z/view/ZooKeeper/job/ZooKeeper-trunk-WinVS2008/ |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 42 weeks, 3 days ago | 0|i2ffqf: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2198 | Set default test.junit.threads to 1. |
Bug | Resolved | Minor | Fixed | Chris Nauroth | Chris Nauroth | Chris Nauroth | 30/May/15 17:22 | 31/May/15 14:55 | 31/May/15 05:23 | 3.5.1, 3.6.0 | build | 0 | 4 | Some systems are seeing test failures under concurrent execution. This issue proposes to change the default {{test.junit.threads}} to 1 so that those environments continue to get consistent test runs. Jenkins and individual developer environments can set multiple threads with a command line argument, so most environments will still get the benefit of faster test runs. | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 42 weeks, 4 days ago | 0|i2ff3r: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2197 | non-ascii character in FinalRequestProcessor.java |
Bug | Resolved | Minor | Fixed | Michi Mutsuzaki | Michi Mutsuzaki | Michi Mutsuzaki | 30/May/15 16:25 | 02/Jun/15 06:57 | 02/Jun/15 01:25 | 3.5.1, 3.6.0 | 0 | 7 | src/java/main/org/apache/zookeeper/server/FinalRequestProcessor.java:134: error: unmappable character for encoding ASCII [javac] // was not being queued ??? ZOOKEEPER-558) properly. This happens, for example, |
9223372036854775807 | No Perforce job exists for this issue. | 5 | 9223372036854775807 | 4 years, 42 weeks, 2 days ago | 0|i2ff3b: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2196 | Web Access link broken |
Bug | Open | Major | Unresolved | Unassigned | Rob Eden | Rob Eden | 22/May/15 17:29 | 22/May/15 17:29 | 0 | 1 | The link for "Web Access" on the [SVN site page|https://zookeeper.apache.org/svn.html] is broken. This link: http://svn.apache.org/viewcvs.cgi/zookeeper/ should be replaced with this: https://svn.apache.org/viewvc/zookeeper (in "site/trunk/content/svn.textile") |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 43 weeks, 6 days ago | 0|i2f4jr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2195 | fsync.warningthresholdms in zoo.cfg not working |
Bug | Closed | Trivial | Fixed | Biju Nair | David Fan | David Fan | 21/May/15 03:27 | 21/Jul/16 16:18 | 20/Mar/16 14:37 | 3.4.6, 3.5.0 | 3.4.9, 3.5.2, 3.6.0 | quorum | 0 | 5 | ZOOKEEPER-2394 | The fsync.warningthresholdms setting in zoo.cfg does not take effect. QuorumPeerConfig.parseProperties stores fsync.warningthresholdms with a prefix, as "zookeeper.fsync.warningthresholdms", but FileTxnLog, where the setting is used, reads it without the "zookeeper." prefix via Long.getLong("fsync.warningthresholdms", 1000), so it never sees the configured value. I want to monitor fsync speed and need this setting to check whether the speed is good enough. |
9223372036854775807 | No Perforce job exists for this issue. | 4 | 9223372036854775807 | 4 years, 4 days ago |
Reviewed
|
0|i2f16f: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
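The ZOOKEEPER-2195 report above boils down to a prefixed write and an unprefixed read of the same system property. A minimal sketch of the mismatch, assuming the behavior described in the report; the class name and the prefixed-then-fallback lookup at the end are illustrative, not ZooKeeper's actual code:

```java
// Hypothetical reproduction of the ZOOKEEPER-2195 prefix mismatch.
public class FsyncPrefixSketch {
    static final String PREFIX = "zookeeper.";
    static final String KEY = "fsync.warningthresholdms";

    public static void main(String[] args) {
        // Per the report, the config parser stores the value under the prefixed key.
        System.setProperty(PREFIX + KEY, "50");

        // The unprefixed lookup (what FileTxnLog did) never sees that value
        // and always falls back to the 1000 ms default.
        long broken = Long.getLong(KEY, 1000);
        System.out.println(broken); // prints 1000

        // One possible repair: consult the prefixed key first (illustrative only).
        long repaired = Long.getLong(PREFIX + KEY, Long.getLong(KEY, 1000));
        System.out.println(repaired); // prints 50
    }
}
```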
| ZooKeeper | ZOOKEEPER-2194 | Let DataNode.getChildren() return an unmodifiable view of its children set |
Improvement | Resolved | Trivial | Fixed | Hitoshi Mitake | Hitoshi Mitake | Hitoshi Mitake | 21/May/15 02:35 | 05/Jun/15 06:47 | 04/Jun/15 12:29 | 3.4.6, 3.5.0 | 3.4.7, 3.5.1, 3.6.0 | server | 0 | 5 | Currently, DataNode.getChildren() directly returns a reference to its private member, children. However, that member should only be modified through addChild() and removeChild(); callers of getChildren() shouldn't modify it directly. To prevent direct modification by callers, this patch lets getChildren() return an unmodifiable view of the children set. If a caller tries to modify it directly, a runtime exception will be raised. |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 |
Patch
|
4 years, 41 weeks, 6 days ago | 0|i2f11r: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
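The change described in ZOOKEEPER-2194 above is the standard Collections.unmodifiableSet idiom. A self-contained sketch with simplified names (DataNodeSketch is a stand-in, not the real DataNode class):

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// Illustrative stand-in for DataNode: children may only change via
// addChild()/removeChild(); getChildren() hands out a read-only view.
public class DataNodeSketch {
    private final Set<String> children = new HashSet<>();

    public void addChild(String child) { children.add(child); }
    public void removeChild(String child) { children.remove(child); }

    // Callers can iterate the view, but any mutation through it throws
    // UnsupportedOperationException at runtime.
    public Set<String> getChildren() {
        return Collections.unmodifiableSet(children);
    }

    public static void main(String[] args) {
        DataNodeSketch node = new DataNodeSketch();
        node.addChild("a");
        try {
            node.getChildren().add("b"); // direct modification is now rejected
        } catch (UnsupportedOperationException e) {
            System.out.println("direct modification rejected");
        }
    }
}
```

Note the view is live: later addChild() calls are visible through a previously returned view, which matches the patch's goal of blocking writes without copying the set.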
| ZooKeeper | ZOOKEEPER-2193 | reconfig command completes even if parameter is wrong obviously |
Bug | Resolved | Major | Fixed | Yasuhito Fukuda | Yasuhito Fukuda | Yasuhito Fukuda | 20/May/15 05:38 | 28/Jul/15 02:49 | 26/Jun/15 20:06 | 3.5.0 | 3.5.1, 3.6.0 | leaderElection, server | 0 | 9 | CentOS7 + Java7 | The reconfig command was confirmed to complete even when a parameter is obviously wrong; refer to the following. - The ensemble consists of four nodes {noformat}
[zk: vm-101:2181(CONNECTED) 0] config
server.1=192.168.100.101:2888:3888:participant
server.2=192.168.100.102:2888:3888:participant
server.3=192.168.100.103:2888:3888:participant
server.4=192.168.100.104:2888:3888:participant
version=100000000
{noformat} - Add a node with the reconfig command {noformat}
[zk: vm-101:2181(CONNECTED) 9] reconfig -add server.5=192.168.100.104:2888:3888:participant;0.0.0.0:2181
Committed new configuration:
server.1=192.168.100.101:2888:3888:participant
server.2=192.168.100.102:2888:3888:participant
server.3=192.168.100.103:2888:3888:participant
server.4=192.168.100.104:2888:3888:participant
server.5=192.168.100.104:2888:3888:participant;0.0.0.0:2181
version=300000007
{noformat} The IP addresses of server.4 and server.5 are duplicates. In this state, leader election will not work properly, and the ensemble can be left in an undesirable state. I think reconfig needs parameter validation. |
9223372036854775807 | No Perforce job exists for this issue. | 8 | 9223372036854775807 | 4 years, 38 weeks, 3 days ago | 0|i2ez9j: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
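ZOOKEEPER-2192 above asks for parameter validation during reconfig. A hedged sketch of one possible duplicate-address check over server lines like those shown in the report; the class name, method, and the host:port parsing assumption are mine, not ZooKeeper's actual validation:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative validation: reject a configuration in which two server specs
// reuse the same quorum address and port.
public class ReconfigValidationSketch {
    // Assumes specs of the form "host:quorumPort:electionPort:role[;clientAddr]".
    public static void checkNoDuplicateAddresses(List<String> serverSpecs) {
        Set<String> seen = new HashSet<>();
        for (String spec : serverSpecs) {
            String[] parts = spec.split(":");
            String hostPort = parts[0] + ":" + parts[1];
            if (!seen.add(hostPort)) {
                throw new IllegalArgumentException("Duplicate quorum address: " + hostPort);
            }
        }
    }

    public static void main(String[] args) {
        List<String> specs = Arrays.asList(
                "192.168.100.104:2888:3888:participant",
                "192.168.100.104:2888:3888:participant;0.0.0.0:2181");
        try {
            checkNoDuplicateAddresses(specs);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```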
| ZooKeeper | ZOOKEEPER-2192 | ZOOKEEPER-2163 Port "Introduce new ZNode type: container" to 3.4.x |
Sub-task | Patch Available | Major | Unresolved | Jordan Zimmerman | Jordan Zimmerman | Jordan Zimmerman | 19/May/15 12:57 | 19/Mar/19 08:43 | 3.4.6 | c client, java client, server | 0 | 6 | ZOOKEEPER-2163 applies to the trunk branch. This feature is too needed to wait for 3.5.x. So, port the feature to the 3.4.x branch so it can be released ahead of 3.5.x. | container_znode_type | 9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 3 years, 33 weeks ago | 0|i2exs7: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2191 | Continue supporting prior Ant versions that don't implement the threads attribute for the JUnit task. |
Improvement | Closed | Major | Fixed | Chris Nauroth | Chris Nauroth | Chris Nauroth | 14/May/15 14:32 | 21/Jul/16 16:18 | 22/May/15 01:28 | 3.5.2, 3.6.0 | build | 0 | 6 | ZOOKEEPER-2183 | ZOOKEEPER-2183 introduced usage of the threads attribute on the <junit> task call in build.xml to speed up test execution. This attribute is only available since Ant 1.9.4. However, we can continue to support older Ant versions by calling the <antversion> task and dispatching to a clone of our <junit> task call that doesn't use the threads attribute. Users of older Ant versions will get the slower single-process test execution. | 9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 4 years, 43 weeks, 6 days ago | 0|i2eqs7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2190 | In StandaloneDisabledTest, testReconfig() shouldn't take leaving servers as joining servers |
Bug | Resolved | Major | Fixed | Hongchao Deng | Hongchao Deng | Hongchao Deng | 13/May/15 22:59 | 16/May/15 00:20 | 14/May/15 15:57 | 3.5.1, 3.6.0 | tests | 0 | 6 | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 44 weeks, 6 days ago | trunk: http://svn.apache.org/viewvc?view=revision&revision=1679444 branch-3.5: http://svn.apache.org/viewvc?view=revision&revision=1679446 |
0|i2epp3: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2189 | QuorumCnxManager: use BufferedOutputStream for initial msg |
Bug | Open | Major | Unresolved | Unassigned | Akihiro Suda | Akihiro Suda | 13/May/15 02:20 | 03/Jun/15 12:46 | 3.5.0 | leaderElection | 0 | 5 | ZOOKEEPER-2098 | This was original JIRA of ZOOKEEPER-2203. For project management reason, all the issues and related discussion are moved to ZOOKEEPER-2203. This JIRA is linked to ZOOKEEPER-2098. ============== This sequence leads the ensemble to a split-brain state: * Start server 1 (config=1:participant, 2:participant, 3:participant) * Start server 2 (config=1:participant, 2:participant, 3:participant) * 1 and 2 believe 2 is the leader * Start server 3 (config=1:observer, 2:observer, 3:participant) * 3 believes 3 is the leader, although 1 and 2 still believe 2 is the leader Such a split-brain ensemble is very unstable. Znodes can be lost easily: * Create some znodes on 2 * Restart 1 and 2 * 1, 2 and 3 can think 3 is the leader * znodes created on 2 are lost, as 1 and 2 sync with 3 I consider this behavior as a bug and that ZK should fail gracefully if a participant is listed as an observer in the config. In current implementation, ZK cannot detect such an invalid config, as FastLeaderElection.sendNotification() sends notifications to only voting members and hence there is no message from observers(1 and 2) to the new voter (3). I think FastLeaderElection.sendNotification() should send notifications to all the members and FastLeaderElection.Messenger.WorkerReceiver.run() should verify acks. Any thoughts? |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 42 weeks, 1 day ago | ZOOKEEPER-368 | 0|i2eny7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2188 | client connection hung up because of dead loop |
Bug | Open | Major | Unresolved | Unassigned | sunhaitao | sunhaitao | 11/May/15 07:07 | 30/Jul/15 01:31 | 3.5.0 | java client | 0 | 3 | There is something wrong with the client code in ClientCnxn.java: it keeps trying to connect to the server in an endless loop. Test steps: shut down the ZooKeeper cluster, then execute the zkCli.sh script to connect to it; the client keeps trying to connect to the ZooKeeper server without stopping.
public void run() {
    clientCnxnSocket.introduce(this, sessionId, outgoingQueue);
    clientCnxnSocket.updateNow();
    clientCnxnSocket.updateLastSendAndHeard();
    int to;
    long lastPingRwServer = Time.currentElapsedTime();
    final int MAX_SEND_PING_INTERVAL = 10000; //10 seconds
    while (state.isAlive()) {
        try {
            if (!clientCnxnSocket.isConnected()) {
                // don't re-establish connection if we are closing
                if (closing) {
                    break;
                }
                startConnect();
                clientCnxnSocket.updateLastSendAndHeard();
            }
public boolean isAlive() {
    return this != CLOSED && this != AUTH_FAILED;
}
Because the state is CONNECTING at the beginning, isAlive() always returns true, which leads to the endless loop. We should add a retry limit to stop this. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 34 weeks ago | 0|i2ejtz: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
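ZOOKEEPER-2188 above proposes a retry limit for the connect loop. An illustrative, self-contained sketch of the bounded-retry idea only; tryConnect, connectWithLimit, and the cap are stand-ins, not actual ClientCnxn code:

```java
// Sketch of bounding the reconnect loop described in ZOOKEEPER-2188.
public class BoundedRetrySketch {
    // Stand-in for isConnected()/startConnect(): every attempt fails,
    // which models the "whole cluster is down" scenario from the report.
    static boolean tryConnect() { return false; }

    // Attempt to connect at most maxRetries times; return attempts made.
    static int connectWithLimit(int maxRetries) {
        int attempts = 0;
        boolean connected = false;
        while (!connected) {              // analogous to while (state.isAlive())
            if (attempts >= maxRetries) {
                break;                    // give up instead of spinning forever
            }
            attempts++;
            connected = tryConnect();
        }
        return attempts;
    }

    public static void main(String[] args) {
        System.out.println(connectWithLimit(5)); // prints 5
    }
}
```

In the real client a backoff between attempts matters as much as the cap, so a production version would also sleep between iterations.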
| ZooKeeper | ZOOKEEPER-2187 | remove duplicated code between CreateRequest{,2} |
Bug | Resolved | Minor | Fixed | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | 08/May/15 23:53 | 30/May/15 06:40 | 29/May/15 13:47 | 3.5.1, 3.6.0 | c client, java client, server | 1 | 6 | ZOOKEEPER-2163 | To avoid cargo culting and reduce duplicated code, we can merge most of CreateRequest & CreateRequest2, given that only the Response object is actually different. This will improve readability of the code and make it less confusing for people adding new opcodes in the future (i.e.: copying a request definition vs reusing what's already there, etc.). |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 42 weeks, 5 days ago | 0|i2eigv: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2186 | QuorumCnxManager#receiveConnection may crash with random input |
Bug | Resolved | Major | Fixed | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | 08/May/15 15:36 | 09/Apr/18 06:11 | 24/May/15 02:34 | 3.4.6, 3.5.0 | 3.4.7, 3.5.1, 3.6.0 | server | 0 | 13 | ZOOKEEPER-3016 | This will allocate an arbitrarily large byte buffer (and try to read it!): {code} public boolean receiveConnection(Socket sock) { Long sid = null; ... sid = din.readLong(); // next comes the #bytes in the remainder of the message int num_remaining_bytes = din.readInt(); byte[] b = new byte[num_remaining_bytes]; // remove the remainder of the message from din int num_read = din.read(b); {code} This will crash the QuorumCnxManager thread, so the cluster will keep going but future elections might fail to converge (ditto for leaving/joining members). Patch coming up in a bit. |
9223372036854775807 | No Perforce job exists for this issue. | 4 | 9223372036854775807 | 4 years, 10 weeks, 2 days ago | 0|i2ehxz: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
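The hardening ZOOKEEPER-2186 above describes has two parts: validate the length field before allocating, and read the payload fully instead of trusting a single read() call. A hedged sketch under assumed names; the 512 KB cap and the class name are illustrative, not ZooKeeper's actual constants:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;

// Sketch of a length-checked version of the receiveConnection read path.
public class ReceiveConnectionSketch {
    static final int MAX_MSG_BYTES = 512 * 1024; // illustrative sanity cap

    public static byte[] readInitialMessage(DataInputStream din) throws IOException {
        long sid = din.readLong(); // server id of the peer (unused in this sketch)
        int numRemainingBytes = din.readInt();
        // Reject hostile or garbage lengths before allocating anything.
        if (numRemainingBytes < 0 || numRemainingBytes > MAX_MSG_BYTES) {
            throw new IOException("Unreasonable message length: " + numRemainingBytes);
        }
        byte[] b = new byte[numRemainingBytes];
        din.readFully(b); // din.read(b) may legally return fewer bytes
        return b;
    }

    public static void main(String[] args) throws IOException {
        // Hostile input: claims Integer.MAX_VALUE remaining bytes.
        ByteBuffer buf = ByteBuffer.allocate(12);
        buf.putLong(1L).putInt(Integer.MAX_VALUE);
        DataInputStream din =
                new DataInputStream(new ByteArrayInputStream(buf.array()));
        try {
            readInitialMessage(din);
        } catch (IOException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

The readFully() swap matters independently of the cap: the original din.read(b) silently accepts a short read, leaving the tail of the buffer zero-filled.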
| ZooKeeper | ZOOKEEPER-2185 | Run server with -XX:+HeapDumpOnOutOfMemoryError and -XX:OnOutOfMemoryError='kill %p'. |
Improvement | Resolved | Minor | Fixed | Chris Nauroth | Chris Nauroth | Chris Nauroth | 08/May/15 13:44 | 19/Jun/15 16:30 | 18/Jun/15 15:26 | 3.5.1, 3.6.0 | documentation, scripts | 0 | 3 | Continuing to run a server process after it runs out of memory can lead to unexpected behavior. This issue proposes that we update scripts and documentation to add these JVM options: # {{-XX:+HeapDumpOnOutOfMemoryError}} for help with post-mortem analysis of why the process ran out of memory. # {{-XX:OnOutOfMemoryError='kill %p'}} to kill the JVM process, under the assumption that a process monitor will restart it. |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 39 weeks, 6 days ago | 0|i2eht3: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2184 | Zookeeper Client should re-resolve hosts when connection attempts fail |
Bug | Closed | Blocker | Fixed | Andor Molnar | Robert P. Thille | Robert P. Thille | 07/May/15 19:46 | 30/Jan/19 09:47 | 22/Jun/18 18:11 | 3.4.6, 3.4.7, 3.4.8, 3.4.9, 3.4.10, 3.5.0, 3.5.1, 3.5.2, 3.5.3, 3.4.11 | 3.6.0, 3.4.13, 3.5.5 | java client | 16 | 40 | 0 | 39000 | KAFKA-4041, ZOOKEEPER-338, MESOS-9113, ZOOKEEPER-1506, ZOOKEEPER-1666 | Ubuntu 14.04 host, Docker containers for Zookeeper & Kafka | Testing in a Docker environment with a single Kafka instance using a single Zookeeper instance. Restarting the Zookeeper container will cause it to receive a new IP address. Kafka will never be able to reconnect to Zookeeper and will hang indefinitely. Updating DNS or /etc/hosts with the new IP address will not help the client to reconnect as the zookeeper/client/StaticHostProvider resolves the connection string hosts at creation time and never re-resolves.
A solution would be for the client to notice that connection attempts fail and attempt to re-resolve the hostnames in the connectString. |
100% | 100% | 39000 | 0 | easyfix, patch, pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 1 year, 35 weeks, 3 days ago | 0|i2eg9z: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2183 | Concurrent Testing Processes and Port Assignments |
Improvement | Resolved | Major | Fixed | Chris Nauroth | Chris Nauroth | Chris Nauroth | 07/May/15 19:12 | 18/May/15 13:54 | 14/May/15 12:45 | 3.5.0 | 3.5.1, 3.6.0 | tests | 0 | 7 | ZOOKEEPER-2191 | Tests use {{PortAssignment#unique}} for assignment of the ports to bind during tests. Currently, this method works by using a monotonically increasing counter from a static starting point. Generally, this is sufficient to achieve uniqueness within a single JVM process, but it does not achieve uniqueness across multiple processes on the same host. This can cause tests to get bind errors if there are multiple pre-commit jobs running concurrently on the same Jenkins host. This also prevents running tests in parallel to improve the speed of pre-commit runs. | 9223372036854775807 | No Perforce job exists for this issue. | 6 | 9223372036854775807 | 4 years, 44 weeks, 3 days ago | 0|i2eg7r: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
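ZOOKEEPER-2183 above notes that a static counter gives ports unique only within one JVM. One common way to get ports unique across processes on the same host is to let the OS pick an ephemeral port by binding port 0; this is a sketch of that general technique, not ZooKeeper's actual PortAssignment implementation:

```java
import java.io.IOException;
import java.net.ServerSocket;

// Illustrative cross-process-safe port pick: bind port 0 and ask the OS
// which ephemeral port it assigned.
public class EphemeralPortSketch {
    public static int uniquePort() throws IOException {
        try (ServerSocket s = new ServerSocket(0)) {
            return s.getLocalPort();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("assigned port: " + uniquePort());
    }
}
```

The trade-off: between closing the probe socket and the test's own bind, another process could in principle grab the port, so harnesses using this trick usually pair it with a bind-retry loop.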
| ZooKeeper | ZOOKEEPER-2182 | Several test suites are not running during pre-commit, because their names do not end with "Test". |
Bug | Resolved | Major | Fixed | Chris Nauroth | Chris Nauroth | Chris Nauroth | 07/May/15 14:23 | 12/May/15 01:47 | 12/May/15 01:37 | 3.5.0 | 3.5.1, 3.6.0 | tests | 0 | 6 | ZOOKEEPER-1667, ZOOKEEPER-2017 | In build.xml, the {{<junit>}} task definition uses an include pattern of {{\*\*/\*$\{test.category\}Test.java}}. This is important so that we don't accidentally try to run utility classes like {{PortAssignment}} or {{TestableZooKeeper}} as if they were JUnit suites. However, several test suites are misnamed so that they don't satisfy this pattern, and therefore pre-commit hasn't been running them. {{ClientRetry}} {{ReconfigFailureCases}} {{WatchEventWhenAutoReset}} |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 45 weeks, 2 days ago | 0|i2efkn: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2181 | Slider-Zookeeper integration testcase fails with Zookeeper-3.5.0-alpha version |
Bug | Open | Major | Unresolved | Unassigned | Ayappan | Ayappan | 05/May/15 07:16 | 03/Jun/15 07:07 | 3.5.0 | 0 | 1 | SLIDER-862 | TestZKIntegration testcase in slider fails with zookeeper-3.5.0-alpha version. From the logs, it came to know the state change went to LOST rather than CONNECTED while creating ZK path. The above testcase passes with zookeeper-3.4.6. A slider jira SLIDER-862 is already opened for this. But the problem seems to be with zookeeper-3.5.0-alpha. Running org.apache.slider.common.tools.TestZKIntegration 2015-04-24 06:56:52,118 [Thread-2] INFO services.MicroZookeeperService (MicroZookeeperService.java:serviceStart(235)) - Starting Local Zookeeper service 2015-04-24 06:56:52,299 [Thread-2] INFO services.MicroZookeeperService (MicroZookeeperService.java:serviceStart(241)) - In memory ZK started at localhost:50577 2015-04-24 06:56:52,300 [Thread-2] INFO test.MicroZKCluster (MicroZKCluster.groovy:createCluster(53)) - Created Micro ZK cluster as localhost:50577 2015-04-24 06:56:52,492 [Thread-2] INFO imps.CuratorFrameworkImpl (CuratorFrameworkImpl.java:start(223)) - Starting 2015-04-24 06:56:52,513 [Thread-2] DEBUG zk.ZKIntegration (ZKIntegration.java:init(96)) - Binding ZK client to localhost:50577 2015-04-24 06:56:52,513 [Thread-2] INFO zk.BlockingZKWatcher (BlockingZKWatcher.java:waitForZKConnection(57)) - waiting for ZK event 2015-04-24 06:56:52,543 [Thread-2-EventThread] DEBUG zk.ZKIntegration (ZKIntegration.java:process(178)) - WatchedEvent state:Expired type:None path:null 2015-04-24 06:56:52,544 [Thread-2-EventThread] DEBUG zk.ZKIntegration (ZKIntegration.java:maybeInit(191)) - initing 2015-04-24 06:56:52,544 [Thread-2-EventThread] DEBUG zk.ZKIntegration (ZKIntegration.java:createPath(222)) - Creating ZK path /services 2015-04-24 06:56:52,545 [Thread-2-EventThread] INFO state.ConnectionStateManager (ConnectionStateManager.java:postState(194)) - State change: LOST 2015-04-24 06:56:52,546 [Thread-2-EventThread] WARN curator.ConnectionState 
(ConnectionState.java:handleExpiredSession(289)) - Session expired event received 2015-04-24 06:56:52,548 [ConnectionStateManager-0] WARN state.ConnectionStateManager (ConnectionStateManager.java:processEvents(212)) - There are no ConnectionStateListeners registered. 2015-04-24 06:56:52,549 [NIOWorkerThread-1] WARN server.NIOServerCnxn (NIOServerCnxn.java:doIO(368)) - Unable to read additional data from client sessionid 0x14ceb499c750000, likely client has closed socket 2015-04-24 06:56:52,550 [Thread-2-EventThread] ERROR zk.ZKIntegration (ZKIntegration.java:process(182)) - Failed to init org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /services at org.apache.zookeeper.KeeperException.create(KeeperException.java:131) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:1067) at org.apache.slider.core.zk.ZKIntegration.createPath(ZKIntegration.java:223) at org.apache.slider.core.zk.ZKIntegration.mkPath(ZKIntegration.java:242) at org.apache.slider.core.zk.ZKIntegration.maybeInit(ZKIntegration.java:193) at org.apache.slider.core.zk.ZKIntegration.process(ZKIntegration.java:180) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:539) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:515) 2015-04-24 06:56:52,550 [NIOWorkerThread-3] WARN server.NIOServerCnxn (NIOServerCnxn.java:doIO(368)) - Unable to read additional data from client sessionid 0x14ceb499c750001, likely client has closed socket 2015-04-24 06:56:52,551 [Thread-2-EventThread] INFO zk.BlockingZKWatcher (BlockingZKWatcher.java:process(37)) - ZK binding callback received |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 46 weeks, 2 days ago | 0|i2eaq7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2180 | quota do not take effect in version 3.4.6 |
Bug | Open | Major | Unresolved | Unassigned | seekerak | seekerak | 04/May/15 22:04 | 23/Mar/19 04:44 | 0 | 3 | zookeeper version 3.4.6 | [zk: localhost:2181(CONNECTED) 18] listquota /mynode absolute path is /zookeeper/quota/mynode/zookeeper_limits Output quota for /mynode count=-1,bytes=100 Output stat for /mynode count=6,bytes=484 [zk: localhost:2181(CONNECTED) 19] listquota /mynode_n absolute path is /zookeeper/quota/mynode_n/zookeeper_limits Output quota for /mynode_n count=2,bytes=-1 Output stat for /mynode_n count=5,bytes=5 |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 51 weeks, 5 days ago | 0|i2ea2n: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2179 | Typo in Watcher.java |
Improvement | Resolved | Trivial | Fixed | Archana T | Eunchan Kim | Eunchan Kim | 04/May/15 04:47 | 02/Mar/16 20:31 | 29/May/15 15:53 | 3.4.5, 3.5.0 | 3.4.7, 3.5.0, 3.6.0 | server | 0 | 5 | at zookeeper/src/java/main/org/apache/zookeeper/Watcher.java, * implement. A ZooKeeper client will get various events from the ZooKeepr should be fixed to * implement. A ZooKeeper client will get various events from the ZooKeeper. (Zookeepr -> Zookeeper) |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 42 weeks, 5 days ago | 0|i2e8k7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2178 | Native client fails compilation on Windows. |
Bug | Resolved | Major | Fixed | Chris Nauroth | Chris Nauroth | Chris Nauroth | 02/May/15 16:29 | 14/Mar/17 00:32 | 01/Jun/15 02:48 | 3.5.0 | 3.5.1, 3.6.0 | c client | 0 | 6 | ZOOKEEPER-827, ZOOKEEPER-1626 | Windows | Due to several recent changes, the native client fails to compile on Windows: # ZOOKEEPER-827 (read-only mode) mismatched a function return type between the declaration and definition. # ZOOKEEPER-1626 (monotonic clock for tolerance to time adjustments) added an include of unistd.h, which does not exist on Windows. # Additionally, ZOOKEEPER-1626 did not implement a code path for accessing the Windows monotonic clock. |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 1 week, 2 days ago | 0|i2e7sn: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2177 | point to md5/sha1/asc files in releases.html |
Task | Open | Minor | Unresolved | Chris Nauroth | Michi Mutsuzaki | Michi Mutsuzaki | 30/Apr/15 02:26 | 18/Nov/15 20:05 | 0 | 3 | ZOOKEEPER-2292 | these files are not mirrored. we should link to these files in http://zookeeper.apache.org/releases.html | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 18 weeks ago | 0|i2e4in: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2176 | Unclear error message should be info not error |
Improvement | Resolved | Major | Fixed | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | 28/Apr/15 13:19 | 06/May/15 07:47 | 05/May/15 13:18 | 3.5.0 | 3.5.1, 3.6.0 | quorum | 0 | 4 | Hi [~shralex], Looking at the CI output of ZOOKEEPER-2163 I see this: {noformat} [exec] [junit] 2015-04-17 17:36:23,750 [myid:] - ERROR [QuorumPeer[myid=4](plain=/0:0:0:0:0:0:0:0:11235)(secure=disabled):QuorumPeer@1394] - writeToDisk == true but configFilename == null {noformat} Though looking at QuorumPeer#setQuorumVerifier I see: {noformat} if (configFilename != null) { try { String dynamicConfigFilename = makeDynamicConfigFilename( qv.getVersion()); QuorumPeerConfig.writeDynamicConfig( dynamicConfigFilename, qv, false); QuorumPeerConfig.editStaticConfig(configFilename, dynamicConfigFilename, needEraseClientInfoFromStaticConfig()); } catch (IOException e) { LOG.error("Error closing file: ", e.getMessage()); } } else { LOG.error("writeToDisk == true but configFilename == null"); } {noformat} there's no proper error handling so I guess maybe we should just make it a warning? Thoughts? |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 46 weeks, 1 day ago | 0|i2e0uv: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2175 | Checksum validation for malformed packets needs to handle. |
Bug | Open | Major | Unresolved | Unassigned | Brahma Reddy Battula | Brahma Reddy Battula | 23/Apr/15 22:17 | 07/Apr/16 03:45 | 0 | 11 | HDFS-8161 | *Session Id from ZK :* 2015-04-15 21:24:54,257 | INFO | CommitProcessor:22 | Established session 0x164cb2b3e4b36ae4 with negotiated timeout 45000 for client /160.149.0.117:44586 | org.apache.zookeeper.server.ZooKeeperServer.finishSessionInit(ZooKeeperServer.java:623) 2015-04-15 21:24:54,261 | INFO | NIOServerCxn.Factory:160-149-0-114/160.149.0.114:24002 | Successfully authenticated client: authenticationID=hdfs/hadoop@HADOOP.COM; authorizationID=hdfs/hadoop@HADOOP.COM. | org.apache.zookeeper.server.auth.SaslServerCallbackHandler.handleAuthorizeCallback(SaslServerCallbackHandler.java:118) 2015-04-15 21:24:54,261 | INFO | NIOServerCxn.Factory:160-149-0-114/160.149.0.114:24002 | Setting authorizedID: hdfs/hadoop@HADOOP.COM | org.apache.zookeeper.server.auth.SaslServerCallbackHandler.handleAuthorizeCallback(SaslServerCallbackHandler.java:134) 2015-04-15 21:24:54,261 | INFO | NIOServerCxn.Factory:160-149-0-114/160.149.0.114:24002 | adding SASL authorization for authorizationID: hdfs/hadoop@HADOOP.COM | org.apache.zookeeper.server.ZooKeeperServer.processSasl(ZooKeeperServer.java:1009) 2015-04-15 21:24:54,262 | INFO | ProcessThread(sid:22 cport:-1): | Got user-level KeeperException when processing *{color:red}sessionid:0x164cb2b3e4b36ae4{color}* type:create cxid:0x3 zxid:0x20009fafc txntype:-1 reqpath:n/a Error Path:/hadoop-ha/hacluster/ActiveStandbyElectorLock Error:KeeperErrorCode = NodeExists for /hadoop-ha/hacluster/ActiveStandbyElectorLock | org.apache.zookeeper.server.PrepRequestProcessor.pRequest(PrepRequestProcessor.java:648) *ZKFC Received :* ZK client 2015-04-15 21:24:54,237 | INFO | main-SendThread(160-149-0-114:24002) | Socket connection established to 160-149-0-114/160.149.0.114:24002, initiating session | org.apache.zookeeper.ClientCnxn$SendThread.primeConnection(ClientCnxn.java:854) 2015-04-15 21:24:54,257 | 
INFO | main-SendThread(160-149-0-114:24002) | Session establishment complete on server 160-149-0-114/160.149.0.114:24002, *{color:blue}sessionid = 0x144cb2b3e4b36ae4 {color}* , negotiated timeout = 45000 | org.apache.zookeeper.ClientCnxn$SendThread.onConnected(ClientCnxn.java:1259) 2015-04-15 21:24:54,260 | INFO | main-EventThread | EventThread shut down | org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:512) 2015-04-15 21:24:54,262 | INFO | main-EventThread | Session connected. | org.apache.hadoop.ha.ActiveStandbyElector.processWatchEvent(ActiveStandbyElector.java:547) 2015-04-15 21:24:54,264 | INFO | main-EventThread | Successfully authenticated to ZooKeeper using SASL. | org.apache.hadoop.ha.ActiveStandbyElector.processWatchEvent(ActiveStandbyElector.java:573) one bit corrupted..please check the following for same.. 144cb2b3e4b36ae4=1010001001100101100101011001111100100101100110110101011100100 164cb2b3e4b36ae4=1011001001100101100101011001111100100101100110110101011100100 |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 50 weeks ago | 0|i2dp53: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2174 | JUnit4ZKTestRunner logs test failure for all exceptions even if the test method is annotated with an expected exception. |
Bug | Closed | Minor | Fixed | Chris Nauroth | Chris Nauroth | Chris Nauroth | 21/Apr/15 19:43 | 21/Jul/16 16:18 | 03/May/15 14:04 | 3.4.7, 3.5.2, 3.6.0 | tests | 0 | 6 | {{JUnit4ZKTestRunner}} wraps JUnit test method execution, and if any exception is thrown, it logs a message stating that the test failed. However, some ZooKeeper tests are annotated with {{@Test(expected=...)}} to indicate that an exception is the expected result, and thus the test passes. The runner should be aware of expected exceptions and only log if an unexpected exception occurs. | 9223372036854775807 | No Perforce job exists for this issue. | 5 | 9223372036854775807 | 4 years, 46 weeks, 3 days ago | 0|i2dkkn: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2173 | ZK startup failure should be handled with proper error message |
Bug | Resolved | Major | Fixed | J.Andreina | J.Andreina | J.Andreina | 21/Apr/15 06:34 | 28/Apr/15 00:30 | 27/Apr/15 20:39 | 3.5.1, 3.6.0 | 0 | 4 | ZOOKEEPER-2156 | If any failure occurs during zk startup (e.g. the myid file does not exist), zk startup still returns as successful (STARTED). ZK startup failure should be handled with a proper error message |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 47 weeks, 2 days ago | 0|i2dj8f: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2172 | Cluster crashes when reconfig a new node as a participant |
Bug | Closed | Critical | Fixed | Mohammad Arshad | Ziyou Wang | Ziyou Wang | 21/Apr/15 00:14 | 07/Mar/18 09:37 | 08/Sep/16 17:01 | 3.5.0 | 3.5.3, 3.6.0 | leaderElection, quorum, server | 0 | 16 | ZOOKEEPER-2513, ZOOKEEPER-2855 | Ubuntu 12.04 + java 7 | The operations are quite simple: start three zk servers one by one, then reconfig the cluster to add the new one as a participant. When I add the third one, the zk cluster may enter a weird state and cannot recover. I found “2015-04-20 12:53:48,236 [myid:1] - INFO [ProcessThread(sid:1 cport:-1)::PrepRequestProcessor@547] - Incremental reconfig” in the node-1 log. So the first node received the reconfig cmd at 12:53:48. Later, it logged “2015-04-20 12:53:52,230 [myid:1] - ERROR [LearnerHandler-/10.0.0.2:55890:LearnerHandler@580] - Unexpected exception causing shutdown while sock still open” and “2015-04-20 12:53:52,231 [myid:1] - WARN [LearnerHandler-/10.0.0.2:55890:LearnerHandler@595] - ******* GOODBYE /10.0.0.2:55890 ********”. From then on, the first node and second node rejected all client connections and the third node didn’t join the cluster as a participant. The whole cluster was down. When the problem happened, all three nodes used the same dynamic config file zoo.cfg.dynamic.10000005d, which only contained the first two nodes. But there was another unused dynamic config file in the node-1 directory, zoo.cfg.dynamic.next, which already contained three nodes. When I extended the waiting time between starting the third node and reconfiguring the cluster, the problem didn’t show again. So it should be a race condition problem. |
9223372036854775807 | No Perforce job exists for this issue. | 34 | 9223372036854775807 | 2 years, 2 weeks, 1 day ago |
Reviewed
|
0|i2disv: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2171 | avoid reverse lookups in QuorumCnxManager |
Bug | Resolved | Major | Fixed | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | 20/Apr/15 20:22 | 15/Feb/16 06:59 | 09/May/15 18:31 | 3.5.1, 3.6.0 | quorum | 0 | 7 | ZOOKEEPER-2367 | Apparently, ZOOKEEPER-107 (via a quick git-blame look) introduced a bunch of getHostName() calls in QCM. Besides the overhead, these can cause problems when mixed with failing/mis-configured DNS servers. It would be nice to reduce them, if that doesn't affect operational correctness. |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 4 years, 45 weeks, 4 days ago | 0|i2dihj: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2170 | Zookeeper is not logging as per the configuration in log4j.properties |
Bug | Patch Available | Major | Unresolved | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 20/Apr/15 07:19 | 05/Feb/20 07:11 | 3.7.0, 3.5.8 | 1 | 7 | ZOOKEEPER-980 | In conf/log4j.properties the default root logger is {code} zookeeper.root.logger=INFO, CONSOLE {code} Changing the root logger to the below value, or any other value, has no effect on logging {code} zookeeper.root.logger=DEBUG, ROLLINGFILE {code} |
9223372036854775807 | No Perforce job exists for this issue. | 5 | 9223372036854775807 | 1 year, 17 weeks, 1 day ago | 0|i2dh0v: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2169 | Enable creation of nodes with TTLs |
New Feature | Resolved | Major | Fixed | Jordan Zimmerman | Camille Fournier | Camille Fournier | 16/Apr/15 16:21 | 07/Jan/20 07:52 | 09/Oct/16 14:36 | 3.6.0 | 3.5.5 | c client, java client, jute, server | 8 | 20 | ZOOKEEPER-2608, ZOOKEEPER-2609 | ZOOKEEPER-2901, ZOOKEEPER-1925 | As a user, I would like to be able to create a node that is NOT tied to a session but that WILL expire automatically if action is not taken by some client within a time window. I propose this to enable clients interacting with ZK via http or other "thin clients" to create ephemeral-like nodes. Some ideas for the design, up for discussion: The node should support all normal ZK node operations including ACLs, sequential key generation, etc, however, it should not support the ephemeral flag. The node will be created with a TTL that is updated via a refresh operation. The ZK quorum will watch this node similarly to the way that it watches for session liveness; if the node is not refreshed within the TTL, it will expire. QUESTIONS: 1) Should we let the refresh operation set the TTL to a different base value? 2) If so, should the setting of the TTL to a new base value cause a watch to fire? 3) Do we want to allow these nodes to have children or prevent this similar to ephemeral nodes? |
100% | 1800 | 0 | ttl_nodes | 9223372036854775807 | No Perforce job exists for this issue. | 9 | 9223372036854775807 | 3 years, 19 weeks ago | 0|i2dd9j: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2168 | ZOOKEEPER-2163 Add C APIs for new createContainer Methods |
Sub-task | Closed | Major | Fixed | Balazs Meszaros | Jordan Zimmerman | Jordan Zimmerman | 15/Apr/15 10:50 | 20/May/19 13:50 | 18/Mar/19 20:12 | 3.5.0 | 3.6.0, 3.5.5 | c client | 0 | 4 | 0 | 6000 | ZOOKEEPER-2609, ZOOKEEPER-2543 | ZOOKEEPER-2163 adds new client methods to create containers. These need to be exposed in the C client as well. | 100% | 100% | 6000 | 0 | container_znode_type, pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 1 year, 2 days ago | 0|i2daef: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2167 | Restarting current leader node sometimes results in a permanent loss of quorum |
Bug | Open | Major | Unresolved | Unassigned | Mike Lundy | Mike Lundy | 14/Apr/15 20:01 | 15/Apr/15 23:50 | 3.4.6 | 0 | 2 | I'm seeing an issue where a restart of the current leader node results in a long-term / permanent loss of quorum (I've only waited 30 minutes, but it doesn't look like it's making any progress). Restarting the same instance _again_ seems to resolve the problem. To me, this looks a lot like the issue described in https://issues.apache.org/jira/browse/ZOOKEEPER-1026, but I'm filing this separately for the moment in case I am wrong. Notes on the attached log: 1) If you search for XXX in the log, you'll see where I've annotated it to include where the process was told to terminate, when it is reported to have completed that, and then the same for the start 2) To save you the trouble of figuring it out, here's the zkid <=> ip mapping: zid=1, ip=10.20.0.19 zid=2, ip=10.20.0.18 zid=3, ip=10.20.0.20 zid=4, ip=10.20.0.21 zid=5, ip=10.20.0.22 3) It's important to note that this log is during the process of a rolling service restart to remove an instance; in this case, zid #2 / 10.20.0.18 is the one being removed, so if you see a conspicuous silence from that service, that's why. 4) I've been unable to reproduce this problem _except_ during cluster size changes, so I suspect that may be related; it's also important to note that this test is going from 5 -> 4 (which means, since we remove one and then do a rolling restart, we are actually temporarily dropping to 3). I know this is not a recommended thing (this is more of a stress test). We have seen this same problem on larger cluster sizes, it just seems easier to reproduce it on smaller sizes. 5) The log starts roughly at the point 10.20.0.21 / zid=4 wins the election during the final quorum; zid=4 is the one whose shutdown triggers the problem. |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 4 years, 49 weeks ago | 0|i2d9c7: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2166 | backupOldConfig() doesn't check for null |
Bug | Open | Major | Unresolved | Unassigned | Jordan Zimmerman | Jordan Zimmerman | 14/Apr/15 17:00 | 14/Apr/15 17:07 | 3.5.0 | server | 0 | 2 | QuorumPeerConfig.backupOldConfig() should check if configFileStr is null or not and do nothing if it is null. This is currently breaking Apache Curator's TestingCluster. | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 49 weeks, 2 days ago | 0|i2d8zz: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2165 | OSGi requires package "server.quorom.flexible" be exported |
Bug | Open | Minor | Unresolved | Unassigned | Simon Kitching | Simon Kitching | 14/Apr/15 07:42 | 23/Apr/15 04:42 | quorum | 0 | 3 | Class QuoromPeer has a constructor which takes a QuorumVerifier value as a parameter. This class is defined in package "org.apache.zookeeper.server.quorum.flexible" but that package is not exported. | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 48 weeks ago | 0|i2d82f: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2164 | fast leader election keeps failing |
Bug | Resolved | Major | Fixed | Mate Szalay-Beko | Michi Mutsuzaki | Michi Mutsuzaki | 14/Apr/15 03:36 | 16/Mar/20 22:59 | 12/Mar/20 09:52 | 3.4.5 | 3.7.0, 3.6.1, 3.5.8 | leaderElection | 10 | 26 | 0 | 28200 | ZOOKEEPER-2080, ZOOKEEPER-3725, ZOOKEEPER-900, ZOOKEEPER-3756 | I have a 3-node cluster with sids 1, 2 and 3. Originally 2 is the leader. When I shut down 2, 1 and 3 keep going back to leader election. Here is what seems to be happening. - Both 1 and 3 elect 3 as the leader. - 1 receives votes from 3 and itself, and starts trying to connect to 3 as a follower. - 3 doesn't receive votes for 5 seconds because connectOne() to 2 doesn't timeout for 5 seconds: https://github.com/apache/zookeeper/blob/41c9fcb3ca09cd3d05e59fe47f08ecf0b85532c8/src/java/main/org/apache/zookeeper/server/quorum/QuorumCnxManager.java#L346 - By the time 3 receives votes, 1 has given up trying to connect to 3: https://github.com/apache/zookeeper/blob/41c9fcb3ca09cd3d05e59fe47f08ecf0b85532c8/src/java/main/org/apache/zookeeper/server/quorum/Learner.java#L247 I'm using 3.4.5, but it looks like this part of the code hasn't changed for a while, so I'm guessing later versions have the same issue. |
100% | 100% | 28200 | 0 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 2 days ago | 0|i2d7r3: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2163 | Introduce new ZNode type: container |
New Feature | Resolved | Major | Fixed | Jordan Zimmerman | Jordan Zimmerman | Jordan Zimmerman | 13/Apr/15 14:19 | 07/Jan/20 07:50 | 20/Jun/15 20:06 | 3.5.0 | 3.5.1 | c client, java client, server | 6 | 22 | 0 | 600 | ZOOKEEPER-2168, ZOOKEEPER-2192 | ZOOKEEPER-2901, ZOOKEEPER-2187, ZOOKEEPER-2413, ZOOKEEPER-723, ZOOKEEPER-834 | BACKGROUND ============ A recurring problem for ZooKeeper users is garbage collection of parent nodes. Many recipes (e.g. locks, leaders, etc.) call for the creation of a parent node under which participants create sequential nodes. When the participant is done, it deletes its node. In practice, the ZooKeeper tree begins to fill up with orphaned parent nodes that are no longer needed. The ZooKeeper APIs don’t provide a way to clean these. Over time, ZooKeeper can become unstable due to the number of these nodes. CURRENT SOLUTIONS =================== Apache Curator has a workaround solution for this by providing the Reaper class which runs in the background looking for orphaned parent nodes and deleting them. This isn’t ideal and it would be better if ZooKeeper supported this directly. PROPOSAL ========= ZOOKEEPER-723 and ZOOKEEPER-834 have been proposed to allow EPHEMERAL nodes to contain child nodes. This is not optimum as EPHEMERALs are tied to a session and the general use case of parent nodes is for PERSISTENT nodes. This proposal adds a new node type, CONTAINER. A CONTAINER node is the same as a PERSISTENT node with the additional property that when its last child is deleted, it is deleted (and CONTAINER nodes recursively up the tree are deleted if empty). CANONICAL USAGE ================ {code} while ( true) { // or some reasonable limit try { zk.create(path, ...); break; } catch ( KeeperException.NoNodeException e ) { try { zk.createContainer(containerPath, ...); } catch ( KeeperException.NodeExistsException ignore) { } } } {code} |
100% | 100% | 6600 | 0 | container_znode_type, pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 12 | 9223372036854775807 | 4 years, 39 weeks, 5 days ago | 0|i2d6vb: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2162 | infinite exception loop occurs when dataDir is lost |
Bug | Patch Available | Major | Unresolved | Akihiro Suda | Akihiro Suda | Akihiro Suda | 13/Apr/15 00:33 | 20/Sep/17 04:03 | 3.5.0 | server | 1 | 7 | ZOOKEEPER-2307, ZOOKEEPER-1653 | This sequence leads server.1 and server.2 to infinite exception loop. * Start server.1 and server.2 with the initial ensemble server.1=participant, server.2=observer. In this time, acceptedEpoch\[i\] == currentEpoch\[i\] == 1 for i = 1, 2. * Invoke reconfig so that acceptedEpoch\[i\] and currentEpoch\[i\] grows up to 2. * Kill server.2 * Remove dataDir of server.2 excluding the myid file. (In real production environments, both of confDir and dataDir can be lost due to reprovisioning) * Start server.2 * server.1 and server.2 enters infinite exception loop. The log (threshold is set to INFO in log4j.properties) size can reach > 100MB in 30 seconds. AFAIK, the bug can be reproduced with ZooKeeper@f5fb50ed2591ba9a24685a227bb5374759516828 (Apr 7, 2015). I made a Docker container so that people who are interested can reproduce the bug easily. (Sorry for no JUnit test right now) {noformat} $ docker run -i -t --rm akihirosuda/zookeeper-bug01 Reproducing the bug: infinite exception loop occurs when dataDir is lost * Resetting * Starting [1,2] with the initial ensemble [1] * Sleeping for 3 seconds * Invoking Reconfig [1]->[2] * Sleeping for 3 seconds * Killing server.2 (pid=10542) * Sleeping for 3 seconds * Resetting /zk02_data * Starting server.2 * Sleeping for 30 seconds /zk01_log: 81665114 bytes The log dir is extremely large. Perhaps the bug was REPRODUCED! /zk02_log: 23949367 bytes The log dir is extremely large. Perhaps the bug was REPRODUCED! * Exiting {noformat} h2. Log h3. 
server.1 {noformat} 2015-04-13 03:48:17,624 [myid:1] - INFO [QuorumPeer[myid=1](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):QuorumPeer@1022] - FOLLOWING 2015-04-13 03:48:17,624 [myid:1] - INFO [QuorumPeer[myid=1](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):ZooKeeperServer@825] - minSessionTimeout set to 4000 2015-04-13 03:48:17,624 [myid:1] - INFO [QuorumPeer[myid=1](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):ZooKeeperServer@834] - maxSessionTimeout set to 40000 2015-04-13 03:48:17,624 [myid:1] - INFO [QuorumPeer[myid=1](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):ZooKeeperServer@156] - Created server with tickTime 2000 minSession Timeout 4000 maxSessionTimeout 40000 datadir /zk01_data/version-2 snapdir /zk01_data/version-2 2015-04-13 03:48:17,624 [myid:1] - INFO [QuorumPeer[myid=1](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):Follower@66] - FOLLOWING - LEADER ELECTION TOOK - 0 2015-04-13 03:48:17,625 [myid:1] - WARN [QuorumPeer[myid=1](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):Follower@93] - Exception when following the leader java.io.IOException: Leaders epoch, 1 is less than accepted epoch, 2 at org.apache.zookeeper.server.quorum.Learner.registerWithLeader(Learner.java:331) at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:75) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:1024) 2015-04-13 03:48:17,626 [myid:1] - INFO [QuorumPeer[myid=1](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):MBeanRegistry@119] - Unregister MBean [org.apache.ZooKeeperService: name0=ReplicatedServer_id1,name1=replica.1,name2=Follower] 2015-04-13 03:48:17,626 [myid:1] - INFO [QuorumPeer[myid=1](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):Follower@198] - shutdown called java.lang.Exception: shutdown Follower at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:198) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:1028) 2015-04-13 03:48:17,626 [myid:1] - DEBUG 
[QuorumPeer[myid=1](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):LearnerZooKeeperServer@162] - ZooKeeper server is not running, so n ot proceeding to shutdown! 2015-04-13 03:48:17,626 [myid:1] - WARN [QuorumPeer[myid=1](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):QuorumPeer@1071] - PeerState set to LOOKING 2015-04-13 03:48:17,626 [myid:1] - INFO [QuorumPeer[myid=1](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):QuorumPeer@946] - LOOKING 2015-04-13 03:48:17,626 [myid:1] - DEBUG [QuorumPeer[myid=1](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):QuorumPeer@875] - Initializing leader election protocol... 2015-04-13 03:48:17,626 [myid:1] - DEBUG [QuorumPeer[myid=1](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):FastLeaderElection@790] - Updating proposal: -9223372036854775808 ( newleader), 0x100000002 (newzxid), -9223372036854775808 (oldleader), 0x100000002 (oldzxid) 2015-04-13 03:48:17,626 [myid:1] - INFO [QuorumPeer[myid=1](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):FastLeaderElection@889] - New election. My id = 1, proposed zxid=0 x100000002 2015-04-13 03:48:17,626 [myid:1] - DEBUG [QuorumPeer[myid=1](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):FastLeaderElection@673] - Sending Notification: -922337203685477580 8 (n.leader), 0x100000002 (n.zxid), 0x2 (n.round), 2 (recipient), 1 (myid), 0x2 (n.peerEpoch) 2015-04-13 03:48:17,626 [myid:1] - DEBUG [WorkerSender[myid=1]:QuorumCnxManager@400] - There is a connection already for server 2 2015-04-13 03:48:17,627 [myid:1] - DEBUG [WorkerReceiver[myid=1]:FastLeaderElection$Messenger$WorkerReceiver@336] - Receive new notification message. 
My id = 1 2015-04-13 03:48:17,627 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@683] - Notification: 2 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round) , LEADING (n.state), 2 (n.sid), 0x1 (n.peerEPoch), LOOKING (my state)100000002 (n.config version) 2015-04-13 03:48:17,627 [myid:1] - DEBUG [QuorumPeer[myid=1](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):FastLeaderElection@812] - I'm a participant: 1 2015-04-13 03:48:17,627 [myid:1] - DEBUG [QuorumPeer[myid=1](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):FastLeaderElection@637] - About to leave FLE instance: leader=2, zx id=0x0, my id=1, my state=FOLLOWING 2015-04-13 03:48:17,627 [myid:1] - INFO [QuorumPeer[myid=1](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):MBeanRegistry@119] - Unregister MBean [org.apache.ZooKeeperService: name0=ReplicatedServer_id1,name1=replica.1,name2=LeaderElection] 2015-04-13 03:48:17,627 [myid:1] - INFO [QuorumPeer[myid=1](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):QuorumPeer@1022] - FOLLOWING .. {noformat} h3. 
server.2 {noformat} 2015-04-13 03:48:17,672 [myid:2] - ERROR [LearnerHandler-/127.0.0.1:36337:LearnerHandler@580] - Unexpected exception causing shutdown while sock still open java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63) at org.apache.zookeeper.server.quorum.QuorumPacket.deserialize(QuorumPacket.java:83) at org.apache.jute.BinaryInputArchive.readRecord(BinaryInputArchive.java:99) at org.apache.zookeeper.server.quorum.LearnerHandler.run(LearnerHandler.java:392) 2015-04-13 03:48:17,672 [myid:2] - WARN [LearnerHandler-/127.0.0.1:36337:LearnerHandler@595] - ******* GOODBYE /127.0.0.1:36337 ******** 2015-04-13 03:48:17,674 [myid:2] - DEBUG [WorkerSender[myid=2]:QuorumCnxManager@400] - There is a connection already for server 1 2015-04-13 03:48:17,676 [myid:2] - INFO [LearnerHandler-/127.0.0.1:36338:LearnerHandler@364] - Follower sid: 1 not in the current config 100000002 2015-04-13 03:48:17,676 [myid:2] - ERROR [LearnerHandler-/127.0.0.1:36338:LearnerHandler@580] - Unexpected exception causing shutdown while sock still open java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63) at org.apache.zookeeper.server.quorum.QuorumPacket.deserialize(QuorumPacket.java:83) at org.apache.jute.BinaryInputArchive.readRecord(BinaryInputArchive.java:99) at org.apache.zookeeper.server.quorum.LearnerHandler.run(LearnerHandler.java:392) 2015-04-13 03:48:17,677 [myid:2] - WARN [LearnerHandler-/127.0.0.1:36338:LearnerHandler@595] - ******* GOODBYE /127.0.0.1:36338 ******** .. {noformat} |
9223372036854775807 | No Perforce job exists for this issue. | 5 | 9223372036854775807 | 2 years, 26 weeks, 1 day ago | 0|i2d61b: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2161 | Cleanup task fails - java.io.FileNotFoundException: /zookeeper.log (Permission Denied) |
Bug | Open | Major | Unresolved | Unassigned | Michael Chiocca | Michael Chiocca | 08/Apr/15 12:10 | 08/Apr/15 12:10 | 3.4.6 | 0 | 1 | The cleanup task fails with the following stack trace. This is happening repeatedly every time the cleanup task runs. Even the command line invocation of cleanup fails with the same stack trace. zookeeper@zoo91-node-5dw4yocu7bvj-fpjhrmhvgyhz-mnjsb4zltcy5-7588:~$ java -cp ./zookeeper-3.4.6.jar:./lib/log4j-1.2.16.jar:./lib/slf4j-log4j12-1.6.1.jar:./lib/slf4j-api-1.6.1.jar:/etc/zookeeper/conf org.apache.zookeeper.server.PurgeTxnLog /var/log/zookeeper /var/lib/zookeeper 5 log4j:ERROR setFile(null,true) call failed. java.io.FileNotFoundException: /zookeeper.log (Permission denied) at java.io.FileOutputStream.open(Native Method) at java.io.FileOutputStream.<init>(FileOutputStream.java:221) at java.io.FileOutputStream.<init>(FileOutputStream.java:142) at org.apache.log4j.FileAppender.setFile(FileAppender.java:294) at org.apache.log4j.RollingFileAppender.setFile(RollingFileAppender.java:207) at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165) at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307) at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172) at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104) at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:809) at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:735) at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:615) at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:502) at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:547) at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:483) at org.apache.log4j.LogManager.<clinit>(LogManager.java:127) at 
org.slf4j.impl.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:73) at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:242) at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:254) at org.apache.zookeeper.server.PurgeTxnLog.<clinit>(PurgeTxnLog.java:45) The data log dir is set to /var/log/zookeeper in the /etc/zookeeper/conf/zoo.cfg config file. But as you can see, specifying the config directory in the Java classpath doesn't help eliminate the problem. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 50 weeks, 1 day ago | 0|i2czcn: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2160 | just test |
Wish | Resolved | Trivial | Invalid | Unassigned | zhu | zhu | 07/Apr/15 07:50 | 07/Apr/15 07:50 | 07/Apr/15 07:50 | 0 | 0 | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 50 weeks, 2 days ago | 0|i2cwkn: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2159 | Pluggable SASL Authentication |
Improvement | Open | Major | Unresolved | Yuliya Feldman | Yuliya Feldman | Yuliya Feldman | 07/Apr/15 02:59 | 05/Feb/20 07:16 | 3.7.0, 3.5.8 | java client, server | 9 | 13 | Today SASLAuthenticationProvider is used for all SASL-based authentication, which creates "if/else" statements in the ZookeeperSaslClient and ZookeeperSaslServer code for just Kerberos and Digest. We want to use yet another SASL-based authentication mechanism, and adding one more "if/else" with code specific to that new mechanism does not make much sense. The proposal is to allow plugging in custom SASL authentication mechanism(s) without further changes to ZooKeeper code. |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 3 years, 17 weeks, 1 day ago | 0|i2cw2v: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2158 | CLONE - Switch to using maven to build ZooKeeper |
Improvement | Resolved | Major | Won't Do | Hiram R. Chirino | laesunK | laesunK | 06/Apr/15 10:31 | 04/Jul/18 11:07 | 04/Jul/18 11:07 | build | 0 | 2 | ZOOKEEPER-83, ZOOKEEPER-1078, ZOOKEEPER-3021 | Maven is a great tool for building Java projects at the ASF. It helps standardize the build a bit since it's convention oriented. Its dependency auto-downloading would remove the need to store the dependencies in svn, and it will handle many of the suggested ASF policies, like gpg signing of releases. The ZooKeeper build is almost vanilla except for the jute compiler bits. Things that would need to change are: * re-organize the source tree a little so that it uses the Maven directory conventions * separate the jute bits out into separate modules so that a Maven plugin can work with them |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 1 year, 37 weeks, 1 day ago | 0|i2cusn: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2157 | Upgrade option should be removed from zkServer.sh usage |
Bug | Resolved | Minor | Fixed | J.Andreina | J.Andreina | J.Andreina | 03/Apr/15 02:21 | 08/Apr/15 07:19 | 07/Apr/15 13:31 | 3.5.1, 3.6.0 | 0 | 5 | ZOOKEEPER-1193 | The upgrade option should be removed from zkServer.sh usage in trunk code. Currently the upgrade option is shown in zkServer.sh usage, while the upgrade feature has already been removed from trunk. {noformat} #:~/March_1/zookeeper/bin> ./zkServer.sh upgrade ZooKeeper JMX enabled by default Using config: /home/REX/March_1/zookeeper/bin/../conf/zoo.cfg Usage: ./zkServer.sh [--config <conf-dir>] {start|start-foreground|stop|restart|status|upgrade|print-cmd} {noformat} |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 4 years, 50 weeks, 1 day ago | 0|i27qrz: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2156 | If JAVA_HOME is not set zk startup and fetching status command execution result misleads user. |
Bug | Closed | Major | Fixed | J.Andreina | J.Andreina | J.Andreina | 03/Apr/15 01:03 | 21/Jul/16 16:18 | 20/May/15 03:36 | 3.5.2, 3.6.0 | scripts | 0 | 6 | ZOOKEEPER-2173 | If JAVA_HOME is not set, the results of the zk startup and status commands mislead the user. 1. Even though zk startup has failed because JAVA_HOME is not set, the CLI displays that zk STARTED. {noformat} #:~/Apr3rd/zookeeper-3.4.6/bin> ./zkServer.sh start JMX enabled by default Using config: /home/REX/Apr3rd/zookeeper-3.4.6/bin/../conf/zoo.cfg Starting zookeeper ... STARTED {noformat} 2. Fetching zk status when JAVA_HOME is not set displays that the process is not running. {noformat} #:~/Apr3rd/zookeeper-3.4.6/bin> ./zkServer.sh status JMX enabled by default Using config: /home/REX/Apr3rd/zookeeper-3.4.6/bin/../conf/zoo.cfg Error contacting service. It is probably not running. {noformat} |
9223372036854775807 | No Perforce job exists for this issue. | 5 | 9223372036854775807 | 4 years, 44 weeks, 1 day ago | 0|i27qo7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2155 | network is not good, the watcher in observer env will clear |
Bug | Resolved | Critical | Invalid | Unassigned | linking12 | linking12 | 02/Apr/15 02:35 | 26/Aug/15 16:45 | 26/Aug/15 16:45 | 3.4.6 | 3.5.0 | quorum | 0 | 4 | When I set up a ZooKeeper ensemble that uses Observers and the network is not very good, I find all of the watchers disappear. I read the source code and found: when the observer connects to the leader, it dumps the DataTree from the leader and rebuilds it on the observer, but the dataWatches and childWatches are cleared by this operation. After I changed the code like: WatchManager dataWatchers = zk.getZKDatabase().getDataTree() .getDataWatches(); WatchManager childWatchers = zk.getZKDatabase().getDataTree() .getChildWatches(); zk.getZKDatabase().clear(); zk.getZKDatabase().deserializeSnapshot(leaderIs); zk.getZKDatabase().getDataTree().setDataWatches(dataWatchers); zk.getZKDatabase().getDataTree().setChildWatches(childWatchers); the watchers no longer disappear |
moreinfo | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 30 weeks, 1 day ago | 3.4.6 | 0|i27opr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2154 | NPE in KeeperException |
Bug | Open | Major | Unresolved | Unassigned | Surendra Singh Lilhore | Surendra Singh Lilhore | 31/Mar/15 01:55 | 05/Feb/20 07:15 | 3.4.6 | 3.7.0, 3.5.8 | java client | 0 | 4 | KeeperException should handle exception is code is null... | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 1 year, 17 weeks, 1 day ago | 0|i27kqn: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2153 | ZOOKEEPER-2120 X509 Authentication Documentation |
Sub-task | Resolved | Major | Fixed | Ian Dimayuga | Hongchao Deng | Hongchao Deng | 30/Mar/15 17:00 | 11/Sep/15 12:53 | 05/May/15 13:30 | 3.5.0 | 3.5.1, 3.6.0 | 0 | 5 | 9223372036854775807 | No Perforce job exists for this issue. | 3 | 9223372036854775807 | 4 years, 46 weeks ago | 0|i27k5b: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2152 | ZOOKEEPER-2135 Intermittent failure in TestReconfig.cc |
Sub-task | Closed | Major | Fixed | Michael Han | Michi Mutsuzaki | Michi Mutsuzaki | 25/Mar/15 13:25 | 17/May/17 23:43 | 24/Aug/16 14:34 | 3.5.3, 3.6.0 | c client | 0 | 9 | ZOOKEEPER-1594, ZOOKEEPER-1712 | I'm seeing this failure in the c client test once in a while: {noformat} [exec] /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/src/c/tests/TestReconfig.cc:474: Assertion: assertion failed [Expression: found != string::npos, 10.10.10.4:2004 not in newComing list] {noformat} https://builds.apache.org/job/ZooKeeper-trunk/2640/console |
reconfiguration | 9223372036854775807 | No Perforce job exists for this issue. | 4 | 9223372036854775807 | 3 years, 30 weeks, 1 day ago |
Reviewed
|
0|i27cmn: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2151 | FollowerZookeeperServer has thousands of outstanding requests stuck in CommitProcessor |
Bug | Resolved | Major | Duplicate | Unassigned | Jared Cantwell | Jared Cantwell | 24/Mar/15 15:20 | 25/Mar/15 16:35 | 25/Mar/15 16:35 | 3.5.0 | 3.5.0 | server | 0 | 2 | ZOOKEEPER-1863 | Ubuntu 12.04 | We are seeing one follower server in our quorum stuck with thousands of outstanding requests: --------------------------------------------- node04:~$ telnet 10.10.10.6 2181 Trying 10.10.10.6... Connected to 10.10.10.6. Escape character is '^]'. *stat* Zookeeper version: 3.5.0-1547702, built on 05/15/2014 03:06 GMT Clients: /10.10.10.6:60646\[0\](queued=0,recved=1,sent=0) /10.10.10.6:60648\[0\](queued=0,recved=1,sent=0) /10.10.10.6:41786\[0\](queued=1,recved=3,sent=1) Latency min/avg/max: 0/0/1887 Received: 3064156900 Sent: 3064134581 Connections: 3 *Outstanding: 24395* Zxid: 0x11050f7e4b Mode: follower Node count: 6969 Connection closed by foreign host. --------------------------------------------- When this happens, our c client is able to establish an initial connection to the server, but any request then times out. It re-establishes a connection, then times out, rinse, repeat. We are noticing this because we set up this particular client to connect directly to only one server in the quorum, so any problem with that server will be noticed. Our other clients are just connecting to the next server in the list, which is why only this client notices a problem. We were able to capture a heap dump in one instance. This is what we observed: - FollowerZookeeperServer.requestsInProcess has count ~24K - CommitProcessor.queuedRequest list has the 24K items in it, so the FinalRequestProcessor's processRequest function isn't ever getting called to complete the requests. 
- CommitProcessor.isWaitingForCommit()==true - CommitProcessor.committedRequests.isEmpty()==true - CommitProcessor.nextPending is a create request - CommitProcessor.currentlyCommitting is null - CommitProcessor.numRequestsProcessing is 0 - FollowerZookeeperServer, which should be calling commit() on the CommitProcessor, has no elements in its pendingTxns list, which indicates that it thinks it has already passed a COMMIT message to the CommitProcessor for every request that is stuck in the queuedRequests list and nextPending member of CommitProcessor. The CommitProcessor's run() is doing this: {quote} Thread 23510: (state = BLOCKED) java.lang.Object.wait(long) @bci=0 (Compiled frame; information may be imprecise) org.apache.zookeeper.server.quorum.CommitProcessor.run() @bci=165, line=182 (Compiled frame) {quote} When we attached via gdb to get the dump, sockets closed, which caused a new round of leader election. When this happened, the issue corrected itself since the whole FollowerZookeeperServer got restarted. I've confirmed that no clock change was happening before things got stuck, 2 days before we noticed it. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 5 years, 1 day ago | 0|i27aof: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2150 | Observers losing connection with active ensemble and don't recover |
Bug | Open | Major | Unresolved | Unassigned | parin jogani | parin jogani | 24/Mar/15 12:06 | 24/Mar/15 12:06 | 3.4.3 | quorum | 0 | 1 | We have a pool of ZooKeeper machines (containing both active servers and observers) in version 3.4.3. We recently updated our Exhibitor from 1.2.x to 1.5.4. We are seeing a strange behavior in our observers: they keep losing their connection with the active ensemble and do not recover. The connection goes into the CLOSE_WAIT state. We don't think there is any relation to Exhibitor. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 5 years, 2 days ago | 0|i27aan: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2149 | Logging of client address when socket connection established |
Improvement | Resolved | Major | Fixed | Hongchao Deng | Hongchao Deng | Hongchao Deng | 23/Mar/15 21:12 | 25/Mar/15 07:20 | 25/Mar/15 03:50 | 3.5.1, 3.6.0 | 0 | 3 | When a socket connection is established, ZooKeeperServer prints the log: "Established session 0x$(session) with negotiated timeout $(timeout) for client: $(client_hostport)" The client, however, only prints the server address: "Socket connection established to $(server_hostport), initiating session" It would be nice to also log the client's local address when the socket connection is established: because clients reconnect and ports are randomly assigned, this would make it easier to associate the two addresses. |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 5 years, 1 day ago | 0|i2799z: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2148 | ZOOKEEPER-2120 ZooKeeper SSL User Guide |
Sub-task | Open | Major | Unresolved | Hongchao Deng | Hongchao Deng | Hongchao Deng | 22/Mar/15 19:33 | 02/Feb/16 00:59 | 3.5.1 | java client | 1 | 9 | SSL is a new feature added in "3.5+". We have a dedicated user guide: https://cwiki.apache.org/confluence/display/ZOOKEEPER/ZooKeeper+SSL+User+Guide |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 7 weeks, 2 days ago | 0|i277hb: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2147 | Remove ZOO_NOTWATCHING_EVENT |
Improvement | Open | Trivial | Unresolved | Unassigned | Tom Distler | Tom Distler | 21/Mar/15 14:12 | 13/Apr/15 13:51 | 3.5.0 | c client | 0 | 1 | All platforms | A review of the ZK code shows that the NOTWATCHING event is never raised. However, most client users wouldn't know this and would (hopefully) write code to handle the event. We ran into this in our own code, as I refactored some event handling only to find this event is never going to occur. The responses from the community in the following discussion seem to confirm that this event should be removed: http://grokbase.com/t/zookeeper/user/1123dc333d/not-watching-event I'm prepared to submit a patch (tested in our load-test environment) if this issue is accepted. I've removed the event completely from all client code (C, Python, etc.). One possibility is to leave the event definition in place, but add a "deprecated" comment so as not to break existing code. |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 49 weeks, 3 days ago | 0|i2736f: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2146 | BinaryInputArchive readString should check length before allocating memory |
Bug | Resolved | Major | Fixed | Hongchao Deng | Hongchao Deng | Hongchao Deng | 20/Mar/15 19:07 | 17/Apr/15 20:48 | 25/Mar/15 03:40 | 3.4.7, 3.5.1, 3.6.0 | 0 | 6 | I recently observed a problem caused by malformed packets: the ZK server crashed because of an OutOfMemoryError. The reason is that BinaryInputArchive didn't check the length before allocating memory in readString(): {code} public String readString(String tag) throws IOException { int len = in.readInt(); if (len == -1) return null; byte b[] = new byte[len]; ... {code} I suggest adding the same check as in readBuffer. |
9223372036854775807 | No Perforce job exists for this issue. | 5 | 9223372036854775807 | 4 years, 48 weeks, 5 days ago | 0|i272m7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
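The missing guard described in ZOOKEEPER-2146 above can be sketched in isolation. This is a hypothetical helper, not the actual jute BinaryInputArchive code; the class name, the MAX_BUFFER cap, and the exact error message are all assumptions for illustration only.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Hypothetical sketch of a length-checked readString: validate the
// length prefix before allocating the byte buffer, so a malformed
// packet cannot trigger an OutOfMemoryError via new byte[huge].
public class ReadStringGuard {
    // Illustrative cap only; a real limit would be configurable.
    static final int MAX_BUFFER = 1024 * 1024;

    public static String readString(DataInputStream in) throws IOException {
        int len = in.readInt();
        if (len == -1) {
            return null;                       // null marker, as in the original
        }
        if (len < 0 || len > MAX_BUFFER) {
            // fail fast on a bogus length instead of attempting the allocation
            throw new IOException("Unreasonable length = " + len);
        }
        byte[] b = new byte[len];              // safe: len already validated
        in.readFully(b);
        return new String(b, "UTF-8");
    }
}
```

With this guard, a frame whose length prefix is, say, Integer.MAX_VALUE fails with an IOException instead of attempting a 2 GB allocation.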
| ZooKeeper | ZOOKEEPER-2145 | Node can be seen but not deleted |
Bug | Open | Major | Unresolved | Unassigned | Frans Lawaetz | Frans Lawaetz | 18/Mar/15 16:27 | 18/Jan/16 19:40 | 3.4.6 | 0 | 6 | I have a three-server ensemble that appears to be working fine in every respect but for the fact that I can ls or get a znode but can not rmr it. >[zk: localhost:2181(CONNECTED) 0] get /accumulo/9354e975-7e2a-4207-8c7b-5d36c0e7765d/masters/goal_state CLEAN_STOP cZxid = 0x15 ctime = Fri Feb 20 13:37:59 CST 2015 mZxid = 0x72 mtime = Fri Feb 20 13:38:05 CST 2015 pZxid = 0x15 cversion = 0 dataVersion = 2 aclVersion = 0 ephemeralOwner = 0x0 dataLength = 10 numChildren = 0 [zk: localhost:2181(CONNECTED) 1] rmr /accumulo/9354e975-7e2a-4207-8c7b-5d36c0e7765d/masters/goal_state Node does not exist: /accumulo/9354e975-7e2a-4207-8c7b-5d36c0e7765d/masters/goal_state I have run a 'stat' against all three servers and they seem properly structured with a leader and two followers. An md5sum of all zoo.cfg shows them to be identical. The problem seems localized to the accumulo/935.... directory as I can create and delete znodes outside of that path fine but not inside of it. For example: [zk: localhost:2181(CONNECTED) 12] create /accumulo/9354e975-7e2a-4207-8c7b-5d36c0e7765d/fubar asdf Node does not exist: /accumulo/9354e975-7e2a-4207-8c7b-5d36c0e7765d/fubar [zk: localhost:2181(CONNECTED) 13] create /accumulo/fubar asdf Created /accumulo/fubar [zk: localhost:2181(CONNECTED) 14] ls /accumulo/fubar [] [zk: localhost:2181(CONNECTED) 15] rmr /accumulo/fubar [zk: localhost:2181(CONNECTED) 16] Here is my zoo.cfg: tickTime=2000 initLimit=10 syncLimit=15 dataDir=/data/extera/zkeeper/data clientPort=2181 maxClientCnxns=300 autopurge.snapRetainCount=10 autopurge.purgeInterval=1 server.1=cdf61:2888:3888 server.2=cdf62:2888:3888 server.3=cdf63:2888:3888 |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 9 weeks, 2 days ago | 0|i26xxb: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2144 | Provide a way to update the auth info on a connection |
Improvement | Open | Major | Unresolved | Unassigned | Karol Dudzinski | Karol Dudzinski | 16/Mar/15 06:48 | 16/Mar/15 19:51 | 0 | 2 | The current auth info implementation makes it very difficult to work with expiring auth info. If a client fails over between servers, it resends its list of auth info in a FIFO order. Therefore, if any of the info has expired, it'll cause the session to be lost. There is currently no way to update or remove any existing info, only add. Any objections to adding an update or remove auth info method? An alternate solution would be for ClientCnxn.AuthData to implement an equals method that only checks the scheme field. As the AuthData is stored in a set, this would have the same effect as an update operation. However, I'm not sure if there is a use case for supplying multiple bits of AuthData for the same scheme? |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 5 years, 1 week, 3 days ago | 0|i26svb: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2143 | ZOOKEEPER-1525 Pass the operation and path to the AuthenticationProvider |
Sub-task | Resolved | Major | Implemented | Unassigned | Karol Dudzinski | Karol Dudzinski | 16/Mar/15 06:38 | 06/Oct/16 09:30 | 06/Oct/16 09:30 | 0 | 3 | Currently, the AuthenticationProvider only gets passed the id of the client and the ACL expression. If one wishes to perform auth checks based on the action or path being acted on, that needs to be included in the ACL expression. This potentially results in lots of individual ACLs being created, which led us to find ZOOKEEPER-2141. It would be great if both the action and path were passed to the AuthenticationProvider. I understand that this needs to be completely backwards compatible. One solution that comes to mind is to create an interface which extends AuthenticationProvider but adds a new matches method which takes the additional parameters. Internally, ZK would use the new interface everywhere. To preserve compatibility, ProviderRegistry could check for classes implementing the original AuthenticationProvider interface and wrap them to allow the new interface to be used everywhere internally. Any thoughts on this approach? Happy to provide a patch to demonstrate what I mean. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 3 years, 24 weeks ago | 0|i26suv: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2142 | JMX ObjectName is incorrect for observers |
Bug | Closed | Trivial | Fixed | Edward Ribeiro | Karol Dudzinski | Karol Dudzinski | 16/Mar/15 06:30 | 21/Jul/16 16:18 | 31/Oct/15 18:22 | 3.4.6, 3.5.1 | 3.4.7, 3.5.2, 3.6.0 | 0 | 8 | Observers show up in JMX as StandaloneServer rather than Observer. | 9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 4 years, 20 weeks, 1 day ago | 0|i26suf: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2141 | ACL cache in DataTree never removes entries |
Bug | Closed | Blocker | Fixed | Adam Milne-Smith | Karol Dudzinski | Karol Dudzinski | 16/Mar/15 06:28 | 21/Jul/16 16:18 | 06/Apr/16 14:09 | 3.4.6 | 3.4.9, 3.5.2 | 0 | 9 | The problem and potential solutions are discussed in http://mail-archives.apache.org/mod_mbox/zookeeper-user/201502.mbox/browser I will attach a proposed patch in due course. |
9223372036854775807 | No Perforce job exists for this issue. | 8 | 9223372036854775807 | 3 years, 50 weeks, 1 day ago | 0|i26su7: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2140 | NettyServerCnxn and NIOServerCnxn code should be improved |
Improvement | Resolved | Major | Fixed | Mohammad Arshad | Mohammad Arshad | Mohammad Arshad | 13/Mar/15 03:03 | 08/Aug/16 10:29 | 29/Jun/15 11:35 | 3.5.1, 3.6.0 | 0 | 5 | ZOOKEEPER-845 | The classes org.apache.zookeeper.server.NIOServerCnxn and org.apache.zookeeper.server.NettyServerCnxn have the following scope for improvement: 1) Duplicate code. These two classes share around 250 lines of duplicate code; all of the command code is duplicated. 2) Many improvements/bug fixes are made in one class but not in the other; these changes should be synced. For example, in NettyServerCnxn: {code} // clone should be faster than iteration // ie give up the cnxns lock faster AbstractSet<ServerCnxn> cnxns; synchronized (factory.cnxns) { cnxns = new HashSet<ServerCnxn>(factory.cnxns); } for (ServerCnxn c : cnxns) { c.dumpConnectionInfo(pw, false); pw.println(); } {code} In NIOServerCnxn: {code} for (ServerCnxn c : factory.cnxns) { c.dumpConnectionInfo(pw, false); pw.println(); } {code} 3) The NettyServerCnxn and NIOServerCnxn classes are unnecessarily bulky. The command classes have altogether different functionality and should go in separate class files. If this is done, it will be easy to add a new command with minimal changes to the existing classes. |
9223372036854775807 | No Perforce job exists for this issue. | 4 | 9223372036854775807 | 4 years, 38 weeks, 2 days ago | 0|i26q87: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
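The NettyServerCnxn snippet quoted in ZOOKEEPER-2140 above copies the connection set under its lock before iterating. A minimal, self-contained illustration of that defensive-copy idiom follows; the class and method names here are hypothetical, not ZooKeeper's actual API.

```java
import java.util.HashSet;
import java.util.Set;

// Illustrates the defensive-copy idiom from the NettyServerCnxn snippet:
// clone a shared set while holding its lock, then iterate the clone, so
// the lock is released quickly and concurrent add/remove on the original
// set cannot throw ConcurrentModificationException mid-iteration.
public class SnapshotIteration {
    private final Set<String> connections = new HashSet<>();

    public void add(String conn) {
        synchronized (connections) {
            connections.add(conn);
        }
    }

    // Dumps every connection into `out`, returning how many were dumped.
    public int dumpAll(StringBuilder out) {
        Set<String> snapshot;
        synchronized (connections) {            // hold the lock only for the copy
            snapshot = new HashSet<>(connections);
        }
        for (String c : snapshot) {             // iterate lock-free
            out.append(c).append('\n');
        }
        return snapshot.size();
    }
}
```

The NIOServerCnxn version quoted above iterates factory.cnxns directly with no copy, which is exactly the kind of divergence the issue says should be synced between the two classes.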
| ZooKeeper | ZOOKEEPER-2139 | Support multiple ZooKeeper client, with different configurations, in a single JVM |
Improvement | Closed | Blocker | Fixed | Mohammad Arshad | Surendra Singh Lilhore | Surendra Singh Lilhore | 13/Mar/15 00:28 | 28/Oct/19 23:59 | 02/May/16 12:18 | 3.5.0 | 3.5.2, 3.6.0 | java client | 1 | 26 | ZOOKEEPER-2396, HBASE-14775, ZOOKEEPER-2323, ZOOKEEPER-2451, ZOOKEEPER-2667, ZOOKEEPER-2517, ZOOKEEPER-2103, KNOX-1133, ZOOKEEPER-2416, ZOOKEEPER-3593, ZOOKEEPER-1467, ZOOKEEPER-2331 | I have two ZK clients in one JVM: one is a secure client and the second is a normal client (for a non-secure cluster). The "zookeeper.sasl.client" system property is "true" by default; because of this, my second client's connection is failing. We should pass all client configurations in the client constructor, like the HDFS client. For example: {code} public ZooKeeper(String connectString, int sessionTimeout, Watcher watcher, Configuration conf) throws IOException { ...... ...... } {code} |
9223372036854775807 | No Perforce job exists for this issue. | 14 | 9223372036854775807 | 3 years, 9 weeks, 1 day ago | 0|i26q3j: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2138 | ZOOKEEPER-2135 ZooKeeper C client testing is failing |
Sub-task | Resolved | Major | Duplicate | Michi Mutsuzaki | Hongchao Deng | Hongchao Deng | 11/Mar/15 17:00 | 19/Dec/19 18:01 | 25/Mar/15 13:26 | 0 | 3 | ZOOKEEPER-1893 | After testHammer was fixed, test-core-java now succeeds but test-core-cppunit has started to fail: https://builds.apache.org/view/S-Z/view/ZooKeeper/job/ZooKeeper-trunk/2624/console https://builds.apache.org/view/S-Z/view/ZooKeeper/job/PreCommit-ZOOKEEPER-Build/2557/console I have tried git bisect under "src/c" and figured out the JIRA causing the problem: * ZOOKEEPER-2114 (might be some other JIRA between ZK-2114 and ZK-2049) My local Jenkins showed the error messages: {code} [exec] /var/lib/jenkins/workspace/zk-trunk/src/c/tests/TestClient.cc:1072: Assertion: assertion failed [Expression: ctx.waitForConnected(zk)] {code} |
sky | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 5 years, 1 day ago | 0|i26npb: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2137 | ZOOKEEPER-2135 Make testPortChange() less flaky |
Sub-task | Closed | Major | Fixed | Michael Han | Hongchao Deng | Hongchao Deng | 10/Mar/15 18:05 | 21/Jul/16 16:18 | 15/Jun/16 13:16 | 3.5.0 | 3.5.2, 3.6.0 | tests | 0 | 8 | ZOOKEEPER-2136, ZOOKEEPER-2000, ZOOKEEPER-2381 | The cause of flaky failure of testPortChange() is a race in sync(). I figured out it could take some time to fix sync(). Meanwhile, we can make testPortChange() less flaky by doing reconfig on the leader. We can change this back in the fix of ZOOKEEPER-2136. |
flakey | 9223372036854775807 | No Perforce job exists for this issue. | 5 | 9223372036854775807 | 3 years, 40 weeks, 1 day ago |
Reviewed
|
0|i26lrj: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2136 | ZOOKEEPER-2135 Sync() should get quorum acks. |
Sub-task | Open | Major | Unresolved | Flavio Paiva Junqueira | Hongchao Deng | Hongchao Deng | 10/Mar/15 01:27 | 14/Dec/19 06:06 | 3.5.5 | 3.7.0 | 1 | 10 | ZOOKEEPER-2137, ZOOKEEPER-1675 | Currently if the sync packet goes to leader it doesn't get quorum acks. This is a problem during reconfig and leader changes. testPortChange() flaky failure is caused by such case. I proposed to change sync() semantics to require quorum acks in any case. |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 38 weeks, 1 day ago | 0|i26k3b: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2135 | fix trunk build |
Bug | Open | Major | Unresolved | Michael Han | Michi Mutsuzaki | Michi Mutsuzaki | 08/Mar/15 17:16 | 14/Dec/19 06:07 | 3.7.0 | 0 | 2 | ZOOKEEPER-2000, ZOOKEEPER-2080, ZOOKEEPER-2134, ZOOKEEPER-2136, ZOOKEEPER-2137, ZOOKEEPER-2138, ZOOKEEPER-2152, ZOOKEEPER-2637 | ZOOKEEPER-2493, ZOOKEEPER-2877, ZOOKEEPER-2486, ZOOKEEPER-2529, ZOOKEEPER-2610, ZOOKEEPER-2481, ZOOKEEPER-2485, ZOOKEEPER-2497, ZOOKEEPER-2538, ZOOKEEPER-2722, ZOOKEEPER-2683, ZOOKEEPER-2487, ZOOKEEPER-2577, ZOOKEEPER-2686, ZOOKEEPER-2720, ZOOKEEPER-2746, ZOOKEEPER-1441, ZOOKEEPER-2482, ZOOKEEPER-2483, ZOOKEEPER-2484, ZOOKEEPER-2502, ZOOKEEPER-2508, ZOOKEEPER-2716 | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 5 years, 2 weeks, 4 days ago | 0|i26htz: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2134 | ZOOKEEPER-2135 AsyncHammerTest.testHammer fails intermittently |
Sub-task | Resolved | Blocker | Fixed | Michi Mutsuzaki | Michi Mutsuzaki | Michi Mutsuzaki | 08/Mar/15 17:14 | 09/Mar/15 17:35 | 09/Mar/15 14:26 | 3.5.1, 3.6.0 | tests | 0 | 4 | The trunk build has been red for a while because of this (and ZOOKEEPER-2000 and ZOOKEEPER-2080). We should fix this sooner than later. | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 5 years, 2 weeks, 3 days ago | 0|i26htr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2133 | zkperl: Segmentation fault if getting a node with null value |
Bug | Closed | Major | Fixed | Botond Hejj | Botond Hejj | Botond Hejj | 05/Mar/15 11:49 | 21/Jul/16 16:18 | 11/Mar/16 01:36 | 3.4.6, 3.5.0 | 3.4.9, 3.5.2, 3.6.0 | contrib-bindings | 0 | 2 | If the node content is null: [zk: (CONNECTED) 0] get /apps null cZxid = 0x10000000d then my $data = $zk->{zkh}->get('/apps'); causes a core dump with a segmentation fault |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 1 week, 6 days ago |
Reviewed
|
zkperl | 0|i26eqv: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2132 | Follower::syncWithLeader should roll logs before taking snapshot |
Bug | Open | Major | Unresolved | Unassigned | Asad Saeed | Asad Saeed | 26/Feb/15 13:49 | 04/Feb/16 07:17 | 3.4.6 | server | 0 | 4 | If multiple leader elections occur before SyncRequestProcessor takes a snapshot and rolls logs (at least 50000 transactions by default), PurgeTxnLog may inadvertently delete the current transaction log file. Follower::syncWithLeader currently takes a snapshot after it is synced with the leader but does not roll logs. If a ZooKeeper restart of a quorum of nodes occurs, the cluster may silently revert to the last snapshot, losing all transactions in the log! |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 7 weeks ago | 0|i263vr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2131 | Error:KeeperErrorCode = NoNode |
Bug | Open | Major | Unresolved | Unassigned | dirtdiver512 | dirtdiver512 | 24/Feb/15 12:25 | 24/Feb/15 12:25 | 3.5.0 | 0 | 1 | ubuntu testing with kafka | getting error below: [2015-02-24 11:11:25,226] INFO Got user-level KeeperException when processing sessionid:0x14bbc922a2b0002 type:create cxid:0x1a zxid:0xe8 txntype:-1 reqpath:n/a Error Path:/consumers/console-consumer-67319/owners Error:KeeperErrorCode = NoNode for /consumers/console-consumer-67319/owners (org.apache.zookeeper.server.PrepRequestProcessor) |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 5 years, 4 weeks, 2 days ago | 0|i25zdr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2130 | Command to get ensemble summary |
New Feature | Open | Minor | Unresolved | Unassigned | Mohammad Arshad | Mohammad Arshad | 24/Feb/15 05:49 | 14/Dec/19 06:06 | 3.7.0 | 0 | 1 | It would be good to have a command which gives a complete summary of the ZooKeeper ensemble. The ensemble summary should tell us which server is the leader and which are followers or observers. Consider a ZooKeeper cluster with the following configuration: server.1=localhost:33230:33235:participant;localhost:33222 server.2=localhost:33231:33236:participant;localhost:33223 server.3=localhost:33232:33237:participant;localhost:33224 server.4=localhost:33233:33238:participant;localhost:33225 server.5=localhost:33234:33239:participant;localhost:33226 When four servers are running and we execute the esum (Ensemble Summary) command, we should get the status of all the servers and their roles. Example: {quote} server.1=localhost:33230:33235:participant;localhost:33222 {color:green}FOLLOWING{color} Client Connections:1 server.2=localhost:33231:33236:participant;localhost:33223 {color:green}FOLLOWING{color} Client Connections:0 server.3=localhost:33232:33237:participant;localhost:33224 {color:red}NOT RUNNING{color} server.4=localhost:33233:33238:participant;localhost:33225 {color:green}FOLLOWING{color} Client Connections:0 server.5=localhost:33234:33239:participant;localhost:33226 {color:blue}LEADING{color} Client Connections:0 {quote} |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 5 years, 1 week, 2 days ago | 0|i25yyf: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2129 | ruok command is not consistent with other four letter commands |
Bug | Resolved | Trivial | Not A Problem | Unassigned | Mohammad Arshad | Mohammad Arshad | 24/Feb/15 05:15 | 19/Dec/19 17:59 | 05/Mar/15 05:30 | 0 | 2 | The ruok command prints its output on the same line, unlike the other four-letter commands, which print output on a new line. Even though the output is correct, it is difficult to notice, especially for a first-time user. Its output should contain a newline character like the other four-letter commands' output. ruok command output: {code} HOST1:/home # echo ruok | netcat 10.x.x.x 2181 imokHOST1:/home # {code} conf command output: {code} HOST1:/home # echo conf | netcat 10.x.x.x 2181 clientPort=2181 dataDir=/tmp/zookeeper/data/version-2 dataLogDir=/tmp/zookeeper/data/version-2 tickTime=2000 ....... HOST1:/home # {code} |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 5 years, 3 weeks ago | 0|i25ywn: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2128 | zoo_aremove_watchers API is incorrect |
Bug | Open | Major | Unresolved | Dave Gosselin | Dave Gosselin | Dave Gosselin | 23/Feb/15 22:46 | 05/Feb/20 07:16 | 3.6.0 | 3.7.0, 3.5.8 | 0 | 4 | The C API for zoo_aremove_watchers incorrectly specifies the seventh argument as a pointer to a function pointer. It should be simply a function pointer only. | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 8 weeks, 3 days ago | 0|i25yhz: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2127 | Document zkCli.sh |
Bug | Resolved | Major | Done | Unassigned | Joe Gamache | Joe Gamache | 23/Feb/15 06:48 | 04/Aug/19 08:45 | 04/Aug/19 08:45 | 0 | 2 | According to the answer provided to this stack overflow question: http://stackoverflow.com/questions/28589703/zookeeper-zkcli-sh-create-switches-documentation/28594057#28594057 the zkCli.sh script is not documented in terms of what all the switches mean. Such documentation should be provided. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 32 weeks, 4 days ago | 0|i25x4v: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2126 | Improve exit log message of EventThread and SendThread by adding SessionId |
Improvement | Resolved | Major | Fixed | Surendra Singh Lilhore | Zhihai Xu | Zhihai Xu | 21/Feb/15 18:58 | 15/May/15 07:27 | 15/May/15 00:16 | 3.6.0 | 3.4.7, 3.5.1, 3.6.0 | java client | 0 | 5 | We saw the following out-of-order log when closing a ZooKeeper client session. {code} 2015-02-16 06:01:12,985 INFO org.apache.zookeeper.ZooKeeper: Session: 0x24b8df4044005d4 closed ..................................... 2015-02-16 06:01:12,995 INFO org.apache.zookeeper.ClientCnxn: EventThread shut down {code} These logs are very confusing if a new ZooKeeper client session is created between the two: we might think the new ZooKeeper client session shut down its EventThread instead of the old, closed session. Should we wait for sendThread and eventThread to die in ClientCnxn.close? We can add the following code in ClientCnxn.close: {code} sendThread.join(timeout); eventThread.join(timeout); {code} With this change, we won't interleave the old closed session with the new session. We could also create a new close API to support this, so we won't affect old code that uses the old close API. |
9223372036854775807 | No Perforce job exists for this issue. | 3 | 9223372036854775807 | 4 years, 44 weeks, 6 days ago | 0|i25waf: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
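The ordering argument in ZOOKEEPER-2126 above, that joining the worker threads (with a timeout) before close() returns guarantees their shutdown messages are logged before the session's "closed" message, can be demonstrated with plain threads. Everything here (class name, message strings, the 5-second timeout) is a hypothetical sketch, not ZooKeeper's ClientCnxn code.

```java
// Demonstrates why the proposed join(timeout) in ClientCnxn.close works:
// Thread.join establishes a happens-before edge, so after the join returns
// the event thread's log entry is guaranteed to precede "Session closed".
public class OrderedShutdown {
    public static String close() throws InterruptedException {
        final StringBuilder log = new StringBuilder();
        Thread eventThread = new Thread(() -> {
            synchronized (log) {
                log.append("EventThread shut down\n");
            }
        });
        eventThread.start();
        eventThread.join(5000);          // the proposed join(timeout)
        synchronized (log) {
            log.append("Session closed\n");
        }
        return log.toString();
    }
}
```

Without the join, the two appends could interleave either way, which is exactly the confusing out-of-order log the issue describes.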
| ZooKeeper | ZOOKEEPER-2125 | ZOOKEEPER-2120 SSL on Netty client-server communication |
Sub-task | Resolved | Major | Fixed | Hongchao Deng | Hongchao Deng | Hongchao Deng | 20/Feb/15 17:32 | 25/Jul/19 02:42 | 17/Mar/15 13:21 | 3.5.1, 3.6.0 | 0 | 20 | SOLR-7893, ZOOKEEPER-2123, ZOOKEEPER-235, FLINK-13417 | Supporting SSL on Netty client-server communication. 1. It supports keystore and truststore usage. 2. It adds an additional ZK server port which supports SSL. This would be useful for a rolling upgrade. RB: https://reviews.apache.org/r/31277/ The patch includes three files: * testing-purpose keystore and truststore under "$(ZK_REPO_HOME)/src/java/test/data/ssl". You might need to create "ssl/". * latest ZOOKEEPER-2125.patch h2. How to use it You need to set some parameters on both the ZK server and client. h3. Server You need to specify a listening SSL port in "zoo.cfg": {code} secureClientPort=2281 {code} Just like what you did with "clientPort". Then set some JVM flags: {code} export SERVER_JVMFLAGS="-Dzookeeper.serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory -Dzookeeper.ssl.keyStore.location=/root/zookeeper/ssl/testKeyStore.jks -Dzookeeper.ssl.keyStore.password=testpass -Dzookeeper.ssl.trustStore.location=/root/zookeeper/ssl/testTrustStore.jks -Dzookeeper.ssl.trustStore.password=testpass" {code} Please change the keystore and truststore parameters accordingly. h3. Client You need to set JVM flags: {code} export CLIENT_JVMFLAGS="-Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty -Dzookeeper.client.secure=true -Dzookeeper.ssl.keyStore.location=/root/zookeeper/ssl/testKeyStore.jks -Dzookeeper.ssl.keyStore.password=testpass -Dzookeeper.ssl.trustStore.location=/root/zookeeper/ssl/testTrustStore.jks -Dzookeeper.ssl.trustStore.password=testpass" {code} Change the keystore and truststore parameters accordingly. Then connect to the server's SSL port, in this case: {code} bin/zkCli.sh -server 127.0.0.1:2281 {code} If you have any feedback, you are more than welcome to discuss it here! |
ssl-tls | 9223372036854775807 | No Perforce job exists for this issue. | 21 | 9223372036854775807 | 1 year, 6 weeks, 1 day ago | 0|i25vkv: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
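The client-side JVM flags shown in the description above can also be set from code before the client is constructed. A minimal sketch, assuming the property names from the description; the helper class, paths, and password are placeholders, not part of ZooKeeper itself:

```java
public class SecureClientConfig {
    // Sets the same system properties the CLIENT_JVMFLAGS export above sets.
    // Keystore/truststore paths and the password are placeholder arguments.
    public static void configure(String keyStore, String trustStore, String password) {
        System.setProperty("zookeeper.clientCnxnSocket",
                "org.apache.zookeeper.ClientCnxnSocketNetty");
        System.setProperty("zookeeper.client.secure", "true");
        System.setProperty("zookeeper.ssl.keyStore.location", keyStore);
        System.setProperty("zookeeper.ssl.keyStore.password", password);
        System.setProperty("zookeeper.ssl.trustStore.location", trustStore);
        System.setProperty("zookeeper.ssl.trustStore.password", password);
        // A ZooKeeper client constructed after this point would then be
        // pointed at the server's secureClientPort, e.g. "127.0.0.1:2281".
    }
}
```

This only prepares the JVM properties; the actual connection is still made by the ZooKeeper client as shown with zkCli.sh above.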
| ZooKeeper | ZOOKEEPER-2124 | Allow Zookeeper version string to have underscore '_' |
Bug | Resolved | Major | Fixed | Chris Nauroth | Jerry He | Jerry He | 20/Feb/15 13:58 | 14/Sep/15 16:35 | 24/May/15 02:38 | 3.4.6 | 3.4.7, 3.5.1, 3.6.0 | 0 | 7 | ZOOKEEPER-2275, ZOOKEEPER-1604 | Using Bigtop or other RPM build for Zookeeper, there is a problem with using the hyphen '-' character in the version string: {noformat} [bigdata@bdvs1166 bigtop]$ gradle zookeeper-rpm :buildSrc:compileJava UP-TO-DATE :buildSrc:compileGroovy UP-TO-DATE :buildSrc:processResources UP-TO-DATE :buildSrc:classes UP-TO-DATE :buildSrc:jar UP-TO-DATE :buildSrc:assemble UP-TO-DATE :buildSrc:compileTestJava UP-TO-DATE :buildSrc:compileTestGroovy UP-TO-DATE :buildSrc:processTestResources UP-TO-DATE :buildSrc:testClasses UP-TO-DATE :buildSrc:test UP-TO-DATE :buildSrc:check UP-TO-DATE :buildSrc:build UP-TO-DATE :zookeeper_vardefines :zookeeper-download :zookeeper-tar Copy /home/bigdata/bigtop/dl/zookeeper-3.4.6-IBM-1.tar.gz to /home/bigdata/bigtop/build/zookeeper/tar/zookeeper-3.4.6-IBM-1.tar.gz :zookeeper-srpm error: line 64: Illegal char '-' in: Version: 3.4.6-IBM-1 :zookeeper-srpm FAILED FAILURE: Build failed with an exception. * Where: Script '/home/bigdata/bigtop/packages.gradle' line: 462 * What went wrong: Execution failed for task ':zookeeper-srpm'. > Process 'command 'rpmbuild'' finished with non-zero exit value 1 * Try: Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. BUILD FAILED {noformat} Also, according to the [rpm-maven-plugin|http://mojo.codehaus.org/rpm-maven-plugin/ident-params.html] documentation: {noformat} version The version number to use for the RPM package. By default, this is the project version. This value cannot contain a dash (-) due to contraints in the RPM file naming convention. Any specified value will be truncated at the first dash release The release number of the RPM. Beginning with release 2.0-beta-2, this is an optional parameter. 
By default, the release will be generated from the modifier portion of the project version using the following rules: If no modifier exists, the release will be 1. If the modifier ends with SNAPSHOT, the timestamp (in UTC) of the build will be appended to end. All instances of '-' in the modifier will be replaced with '_'. If a modifier exists and does not end with SNAPSHOT, "_1" will be appended to end. {noformat} We should allow underscore '_' as part of the version string. e.g. 3.4.6_abc_1 |
9223372036854775807 | No Perforce job exists for this issue. | 3 | 9223372036854775807 | 4 years, 43 weeks, 4 days ago | 0|i25v9r: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
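Until the version string itself allows underscores, a build script can pre-sanitize the version before handing it to rpmbuild, following the '-' to '_' rule quoted from the rpm-maven-plugin documentation above. A minimal sketch; the class and method names are hypothetical:

```java
public class RpmVersion {
    // RPM forbids '-' in the Version: tag; replace each dash with an
    // underscore, per the rpm-maven-plugin rule quoted above.
    public static String sanitize(String version) {
        return version.replace('-', '_');
    }
}
```

For the failing example above, "3.4.6-IBM-1" becomes "3.4.6_IBM_1", which rpmbuild accepts.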
| ZooKeeper | ZOOKEEPER-2123 | ZOOKEEPER-2120 Provide implementation of X509 AuthenticationProvider |
Sub-task | Resolved | Minor | Fixed | Ian Dimayuga | Hongchao Deng | Hongchao Deng | 18/Feb/15 18:02 | 29/Mar/15 07:24 | 28/Mar/15 15:13 | 3.5.1, 3.6.0 | 0 | 7 | ZOOKEEPER-2125 | 9223372036854775807 | No Perforce job exists for this issue. | 6 | 9223372036854775807 | 4 years, 51 weeks, 4 days ago | 0|i25scv: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2122 | ZOOKEEPER-2063 Implement SSL support in the Zookeeper C client library |
Sub-task | Resolved | Trivial | Fixed | Mate Szalay-Beko | Ashish Amarnath | Ashish Amarnath | 18/Feb/15 12:24 | 26/Nov/19 04:26 | 21/Nov/19 03:54 | 3.5.0 | 3.6.0 | c client | 5 | 12 | 0 | 51600 | ZOOKEEPER-235, ZOOKEEPER-3567 | Implement SSL support in the Zookeeper C client library to work with the secure server. |
100% | 100% | 51600 | 0 | build, pull-request-available, security, ssl-tls | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 17 weeks ago |
Incompatible change, Reviewed
|
0|i25rsv: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2121 | Documentation of create method is unclear, incorrect, or both |
Bug | Open | Major | Unresolved | Unassigned | Joe Gamache | Joe Gamache | 18/Feb/15 12:08 | 16/Aug/15 04:04 | 3.4.6 | documentation | 0 | 3 | If you go to the create method documentation here: http://zookeeper.apache.org/doc/r3.4.6/api/index.html Then you see: {code} public String create(String path, byte[] data, List<ACL> acl, CreateMode createMode) throws KeeperException, InterruptedException Create a node with the given path. The node data will be the given data, and node acl will be the given acl. The flags argument specifies whether the created node will be ephemeral or not. An ephemeral node will be removed by the ZooKeeper automatically when the session associated with the creation of the node expires. The flags argument can also specify to create a sequential node. The actual path name of a sequential node will be the given path plus a suffix "i" where i is the current sequential number of the node. The sequence number is always fixed length of 10 digits, 0 padded. Once such a node is created, the sequential number will be incremented by one. {code} While there are 'path', 'data', 'acl', and 'createMode' arguments, there is no "flags argument". This documentation needs to be corrected to be clear, unambiguous, and perhaps provide an example. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 31 weeks, 4 days ago | 0|i25rrj: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
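For reference, the sequential-suffix behavior the quoted javadoc describes (the given path plus a fixed-length, zero-padded 10-digit counter) can be reproduced like this. A hypothetical helper for illustration, not ZooKeeper's own code:

```java
public class SequentialPath {
    // A node created in a *_SEQUENTIAL CreateMode gets the requested path
    // plus a zero-padded 10-digit counter as its actual name.
    public static String actualPath(String requestedPath, int sequence) {
        return String.format("%s%010d", requestedPath, sequence);
    }
}
```

So creating "/queue/item-" with sequence 7 yields "/queue/item-0000000007".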
| ZooKeeper | ZOOKEEPER-2120 | SSL feature on Netty |
New Feature | Open | Major | Unresolved | Hongchao Deng | Hongchao Deng | Hongchao Deng | 17/Feb/15 15:20 | 13/May/15 13:08 | 0 | 3 | ZOOKEEPER-2119, ZOOKEEPER-2123, ZOOKEEPER-2125, ZOOKEEPER-2148, ZOOKEEPER-2153 | ZOOKEEPER-2094 | As we discussed in ZOOKEEPER-2094, the SSL work would be divided into several subtasks: 1. Provide implementation of X509 AuthenticationProvider 2. Modify ZooKeeper Netty server and client to support SSL 3. Modify ZooKeeperServerMain to support SSL This is the umbrella task. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 5 years, 5 weeks, 2 days ago | 0|i25qlj: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2119 | ZOOKEEPER-2120 Netty client docs |
Sub-task | Resolved | Major | Fixed | Hongchao Deng | Hongchao Deng | Hongchao Deng | 17/Feb/15 11:42 | 25/Feb/15 06:15 | 24/Feb/15 12:47 | 3.5.1, 3.6.0 | 0 | 5 | ZOOKEEPER-2069 adds Netty client option. We need to add docs on how to use it. | 9223372036854775807 | No Perforce job exists for this issue. | 5 | 9223372036854775807 | 5 years, 4 weeks, 1 day ago | 0|i25q4f: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2118 | CLONE - Netty for quorum communication |
Bug | Open | Major | Unresolved | Unassigned | ysl871308 | ysl871308 | 15/Feb/15 01:32 | 01/Apr/15 01:52 | quorum | 0 | 2 | We need Netty in quorum communication to make use of the SSL/auth features in Netty. This might need more thought, as with ZOOKEEPER-901. This issue would be a good place to start the discussion. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 5 years, 5 weeks, 3 days ago | 0|i25nef: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2117 | "caught end of stream", server: "Stale state" of a Zk client just after connecting |
Bug | Open | Critical | Unresolved | Unassigned | Bruno Gauthier | Bruno Gauthier | 13/Feb/15 10:13 | 03/Mar/16 00:59 | 3.5.0 | c client | 0 | 2 | Windows 8.1, Windows 2012, Visual Studio 2012 | Hi All, Under Windows 8.1 and 2012, using the ZooKeeper C client 3.5.0, when running my ZooKeeper client, just after the ZooKeeper Client is connecting with the ZooKeeper server, the ZooKeeper server is generating a “caught end of stream” exception and deciding my ZooKeeper client is not responsive: Zookeeper.c::check_events, line 2298: ESTALE. (see log below). This problem systematically appears if the ZooKeeper DLL is NOT linked statically with the Visual Studio debug version of the threaded runtime library. This is reproducible 10/10. To be clear, the Windows ZooKeeper C client will work only if you link your ZooKeeper DLL with the switch "/MTd" (see VS Studio->Project->Configuration properties->C/C++->Code generation->runtime library) Thanks Bruno ======================================== ZooKeeper server log ======================================== 2015-02-06 13:19:57,552 [myid:vgcclustermgr] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:31000:NIOServerCnxnFactory@197] - Accepted socket connection from /10.1.200.237:63499 2015-02-06 13:19:57,553 [myid:vgcclustermgr] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:31000:ZooKeeperServer@868] - Client attempting to establish new session at /10.1.200.237:63499 2015-02-06 13:19:57,554 [myid:vgcclustermgr] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:31000:NIOServerCnxnFactory@197] - Accepted socket connection from /10.1.200.237:63500 2015-02-06 13:19:57,554 [myid:vgcclustermgr] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:31000:ZooKeeperServer@868] - Client attempting to establish new session at /10.1.200.237:63500 2015-02-06 13:19:57,555 [myid:vgcclustermgr] - INFO [SyncThread:0:ZooKeeperServer@617] - Established session 0x14b5bfcba7b0409 with negotiated timeout 80000 for client /10.1.200.237:63499 2015-02-06 13:19:57,555 
[myid:vgcclustermgr] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:31000:NIOServerCnxn@357] - caught end of stream exception EndOfStreamException: Unable to read additional data from client sessionid 0x14b5bfcba7b0409, likely client has closed socket at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228) at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208) at java.lang.Thread.run(Thread.java:744) 2015-02-06 13:19:57,555 [myid:vgcclustermgr] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:31000:NIOServerCnxn@1007] - Closed socket connection for client /10.1.200.237:63499 which had sessionid 0x14b5bfcba7b0409 2015-02-06 13:19:57,559 [myid:vgcclustermgr] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:31000:NIOServerCnxnFactory@197] - Accepted socket connection from /10.1.200.237:63501 |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 3 weeks ago | 0|i25lpr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2116 | zkCli.sh doesn't honor host:port parameter |
Bug | Resolved | Critical | Implemented | Surendra Singh Lilhore | Maxim Novikov | Maxim Novikov | 11/Feb/15 18:48 | 28/May/15 23:51 | 28/May/15 05:50 | 3.4.6 | 3.6.0 | java client, scripts | 1 | 4 | Ubuntu 12 | This doc http://zookeeper.apache.org/doc/r3.1.2/zookeeperStarted.html ("Connecting to ZooKeeper" section) says: Once ZooKeeper is running, you have several options for connection to it: Java: Use bin/zkCli.sh 127.0.0.1:2181 In fact, it doesn't work that way. I am running ZooKeeper with a different port to listen to client connections (2888), and this command {code} bin/zkCli.sh 127.0.0.1:2888 {code} is still trying to connect to 2181. {code:title=output|borderStyle=solid} Connecting to localhost:2181 2015-02-11 15:38:14,415 [myid:] - INFO [main:Environment@100] - Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT 2015-02-11 15:38:14,421 [myid:] - INFO [main:Environment@100] - Client environment:host.name=localhost 2015-02-11 15:38:14,421 [myid:] - INFO [main:Environment@100] - Client environment:java.version=1.7.0_17 2015-02-11 15:38:14,424 [myid:] - INFO [main:Environment@100] - Client environment:java.vendor=Oracle Corporation 2015-02-11 15:38:14,424 [myid:] - INFO [main:Environment@100] - Client environment:java.home=/usr/java/jdk1.7.0_17/jre 2015-02-11 15:38:14,424 [myid:] - INFO [main:Environment@100] - Client environment:java.class.path=/opt/zookeeper-3.4.6/bin/../build/classes:/opt/zookeeper-3.4.6/bin/../build/lib/*.jar:/opt/zookeeper-3.4.6/bin/../lib/slf4j-log4j12-1.6.1.jar:/opt/zookeeper-3.4.6/bin/../lib/slf4j-api-1.6.1.jar:/opt/zookeeper-3.4.6/bin/../lib/netty-3.7.0.Final.jar:/opt/zookeeper-3.4.6/bin/../lib/log4j-1.2.16.jar:/opt/zookeeper-3.4.6/bin/../lib/jline-0.9.94.jar:/opt/zookeeper-3.4.6/bin/../zookeeper-3.4.6.jar:/opt/zookeeper-3.4.6/bin/../src/java/lib/*.jar:../conf::/usr/share/antlr3/lib/antlr-3.5-complete-no-st3.jar 2015-02-11 15:38:14,425 [myid:] - INFO [main:Environment@100] - Client 
environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib 2015-02-11 15:38:14,425 [myid:] - INFO [main:Environment@100] - Client environment:java.io.tmpdir=/tmp 2015-02-11 15:38:14,425 [myid:] - INFO [main:Environment@100] - Client environment:java.compiler=<NA> 2015-02-11 15:38:14,425 [myid:] - INFO [main:Environment@100] - Client environment:os.name=Linux 2015-02-11 15:38:14,425 [myid:] - INFO [main:Environment@100] - Client environment:os.arch=amd64 2015-02-11 15:38:14,426 [myid:] - INFO [main:Environment@100] - Client environment:os.version=3.8.0-41-generic 2015-02-11 15:38:14,426 [myid:] - INFO [main:Environment@100] - Client environment:user.name=mnovikov 2015-02-11 15:38:14,426 [myid:] - INFO [main:Environment@100] - Client environment:user.home=/home/mnovikov 2015-02-11 15:38:14,426 [myid:] - INFO [main:Environment@100] - Client environment:user.dir=/opt/zookeeper-3.4.6/bin 2015-02-11 15:38:14,428 [myid:] - INFO [main:ZooKeeper@438] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@3107eafc Welcome to ZooKeeper! 2015-02-11 15:38:14,471 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:2181. 
Will not attempt to authenticate using SASL (unknown error) 2015-02-11 15:38:14,479 [myid:] - WARN [main-SendThread(localhost:2181):ClientCnxn$SendThread@1102] - Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:692) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081) {code} PS1 I can connect to ZK at 2888 using ZK Java client from code specifying the correct port with no issues. But CLI seems just to ignore the provided host:port parameter. PS2 Tried to run it with the pre-defined ZOOCFGDIR environment variable (to point to the path with the config file where the client port is set to 2888). No luck, same results as shown above. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 42 weeks, 6 days ago | 0|i25ihr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2115 | Initialize command succeeds even though it didn't in case of permission errors on the data directory |
Bug | Patch Available | Trivial | Unresolved | Biju Nair | Manikandan Narayanaswamy | Manikandan Narayanaswamy | 05/Feb/15 19:44 | 14/Dec/19 06:06 | 3.5.0 | 3.7.0 | scripts | 0 | 4 | While testing single-user mode, I noticed that the initialize command reports success even though it failed due to permission errors on the data directory: {code} .... + exec /usr/lib/zookeeper/bin/zkServer-initialize.sh --myid=1 mkdir: cannot create directory `/var/lib/zookeeper/version-2': Permission denied mkdir: cannot create directory `/var/lib/zookeeper/version-2': Permission denied /usr/lib/zookeeper/bin/zkServer-initialize.sh: line 112: /var/lib/zookeeper/myid: Permission denied {code} |
9223372036854775807 | No Perforce job exists for this issue. | 5 | 9223372036854775807 | 4 years, 2 weeks, 5 days ago | 0|i259nj: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2114 | jute generated allocate_* functions are not externally visible |
Bug | Resolved | Major | Fixed | Tim Crowder | Tim Crowder | Tim Crowder | 05/Feb/15 03:06 | 22/Feb/15 17:52 | 22/Feb/15 16:12 | 3.5.0 | 3.4.7, 3.5.1, 3.6.0 | c client | 0 | 4 | Some jute generated functions (e.g. allocate_ACL_vector) that should be publicly exported are given local (vs global) linkage. This is due to an incomplete regex for EXPORT_SYMBOLS in the C Makefile.am. Without allocate_ACL_vector it's not possible to set ACL lists from C. The regex should include "allocate_" : EXPORT_SYMBOLS = '(zoo_|zookeeper_|zhandle|Z|format_log_message|log_message|logLevel|deallocate_|allocate_|zerror|is_unrecoverable)' |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 5 years, 4 weeks, 4 days ago | Expose jute-generated allocate_XXX functions in libzookeeper. | 0|i2582v: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2113 | SSL support for ClientCnxnSocketNetty |
Bug | Open | Major | Unresolved | Hongchao Deng | Hongchao Deng | Hongchao Deng | 29/Jan/15 16:38 | 29/Jan/15 16:55 | 0 | 2 | This is a SSL feature on netty client. | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 5 years, 8 weeks ago | 0|i24z4v: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2112 | Netty for quorum communication |
Bug | Open | Major | Unresolved | Unassigned | Hongchao Deng | Hongchao Deng | 27/Jan/15 20:00 | 01/Apr/15 01:52 | quorum | 0 | 2 | Add Netty option to replace NIO for quorum communication. | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 5 years, 8 weeks, 1 day ago | 0|i24vlr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2111 | Not isAlive states should be synchronized in ClientCnxn |
Bug | Resolved | Major | Fixed | Hongchao Deng | Hongchao Deng | Hongchao Deng | 27/Jan/15 14:21 | 21/Feb/20 21:55 | 31/Jan/15 02:11 | 3.5.1, 3.6.0 | java client | 0 | 5 | ZOOKEEPER-3652 | In ClientCnxn.queuePacket, it checks the state and closing variables and then makes decisions. There is a TOCTOU race in queuePacket(): {code} if (!state.isAlive() || closing) { conLossPacket(packet); } else { ... } {code} A possible race: in SendThread.run(): {code} while (state.isAlive()) { ... } cleanup(); {code} When queuePacket() checks, state is still alive. Then state stops being alive and SendThread.run() cleans up outgoingQueue. Then queuePacket() adds the packet to outgoingQueue. The packet should be woken up with an exception, but it won't be in this case. |
9223372036854775807 | No Perforce job exists for this issue. | 4 | 9223372036854775807 | 5 years, 7 weeks, 5 days ago | 0|i24uvb: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
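The general fix for this class of check-then-act race is to perform the liveness check and the enqueue under the same lock that the cleanup path takes. A minimal standalone sketch of the pattern, not the actual ClientCnxn code:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class SafeQueue<T> {
    private final Queue<T> outgoing = new ArrayDeque<>();
    private boolean alive = true;

    // Check-then-act under the same lock the cleanup path takes, so a
    // packet can never slip in after cleanup has drained the queue.
    public synchronized boolean enqueue(T packet) {
        if (!alive) {
            return false; // caller signals connection loss instead
        }
        return outgoing.add(packet);
    }

    public synchronized void cleanup() {
        alive = false;
        outgoing.clear(); // nothing can be added concurrently
    }

    public synchronized int size() {
        return outgoing.size();
    }
}
```

With this layout, the interleaving described above (check passes, cleanup runs, enqueue lands in a dead queue) is impossible: either the enqueue completes before cleanup drains it, or it observes alive == false and is rejected.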
| ZooKeeper | ZOOKEEPER-2110 | Typo fixes in the ZK documentation |
Improvement | Resolved | Minor | Fixed | Jeffrey Schroeder | Jeffrey Schroeder | Jeffrey Schroeder | 26/Jan/15 23:11 | 29/Jan/15 21:29 | 27/Jan/15 11:33 | 3.5.0 | 3.5.1, 3.6.0 | documentation | 0 | 3 | So as part of building an Aurora cluster ontop of Mesos, I wanted to learn ZK. I spent an evening reading much of the ZK documentation and noticed many grammatical or spelling errors. Being a good OSS citizen, I've went through the effort to fix them and create a patch fixing it. | 9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 5 years, 8 weeks, 1 day ago | 0|i24ttr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2109 | Typo in src/c/src/load_gen.c |
Bug | Resolved | Trivial | Fixed | Surendra Singh Lilhore | Emmanuel Bourg | Emmanuel Bourg | 26/Jan/15 07:29 | 16/Mar/15 07:24 | 16/Mar/15 01:28 | 3.5.0 | 3.5.1, 3.6.0 | 0 | 5 | There is a minor typo in {{src/c/src/load_gen.c}}, "Succesfully" should be spelled "Successfully" | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 5 years, 1 week, 3 days ago | 0|i24spb: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2108 | Compilation error in ZkAdaptor.cc with GCC 4.7 or later |
Bug | Open | Minor | Unresolved | Unassigned | Emmanuel Bourg | Emmanuel Bourg | 26/Jan/15 07:19 | 11/Oct/19 11:02 | 3.4.6 | 1 | 1 | Hi, Debian and Fedora have a patch fixing a compilation failure in ZkAdaptor.cc but it doesn't appear to be fixed in the upcoming version 3.5.0. This issue is similar to ZOOKEEPER-470 and ZOOKEEPER-1795. The error is : {code} g++ -DHAVE_CONFIG_H -I. -I.. -D_FORTIFY_SOURCE=2 -I/home/ebourg/packaging/zookeeper/src/contrib/zktreeutil/../../c/include -I/home/ebourg/packaging/zookeeper/src/contrib/zktreeutil/../../c/generated -I../include -I/usr/local/include -I/usr/include -I/usr/include/libxml2 -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -MT ZkAdaptor.o -MD -MP -MF .deps/ZkAdaptor.Tpo -c -o ZkAdaptor.o ZkAdaptor.cc ZkAdaptor.cc: In member function ‘void zktreeutil::ZooKeeperAdapter::reconnect()’: ZkAdaptor.cc:220:21: error: ‘sleep’ was not declared in this scope sleep (1); {code} This is fixed by including unistd.h in ZkAdaptor.cc or ZkAdaptor.h The Debian patch: https://sources.debian.net/src/zookeeper/3.4.5%2Bdfsg-2/debian/patches/ftbfs-gcc-4.7.diff/ and the Fedora patch: http://pkgs.fedoraproject.org/cgit/zookeeper.git/tree/zookeeper-3.4.5-zktreeutil-gcc.patch |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 22 weeks, 6 days ago | 0|i24sov: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2107 | zookeeper client should support custom HostProviders |
Improvement | Resolved | Major | Fixed | Robert Kamphuis | Robert Kamphuis | Robert Kamphuis | 23/Jan/15 02:12 | 16/Mar/15 07:24 | 16/Mar/15 04:18 | 3.5.0 | 3.5.1, 3.6.0 | java client | 0 | 5 | The zookeeper client currently contains a StaticHostProvider and no means to replace it with your own implementation of the existing HostProvider interface. It would be great if the zookeeper client would enable you to create an instance with your own implementation of the HostProvider interface. We have been testing this change with our implementation of HostProvider which does the name to ip lookup at the time of finding the next() server to connect to. In our AWS based deployments, this enables that applications can actually connect to swapped out zookeeper-servers which typically get a new ip address. With the current StaticHostProvider in practice you will need to restart the application to see the replaced zookeeper as the application continues to try to connect to the old no longer existing IP address. With this little change plus a straightforward implementation of the HostProvider interface (our LateResolvingHostProvider), we can replace the zookeeper servers one at a time, re-assigning the elastic ips we use for the zookeeper servers, and have the application servers re-connect to the zookeeper cluster including the replaced ones without any downtime and without having to rely on the elastic ips for client to zookeeper-server connections. The reason for not using elastic-ips in the connect-strings but use the names which map to the (changing) private ips is to be able to rely on security-groups to control access. While this seems very specific for AWS, this anyway seems a generic improvement for other deployments where the mapping from server name to IP address is not static. |
9223372036854775807 | No Perforce job exists for this issue. | 4 | 9223372036854775807 | 5 years, 1 week, 3 days ago | 0|i24pnj: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
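A late-resolving provider of the kind described can be sketched as below. This is a simplified illustration, not ZooKeeper's HostProvider interface (which has more methods); the point shown is only that DNS resolution is deferred to each call to next(), so a replaced server's new IP is picked up without restarting the application:

```java
import java.net.InetSocketAddress;
import java.util.List;

public class LateResolvingHostProvider {
    private final List<InetSocketAddress> servers;
    private int index = -1;

    // Hold unresolved host:port pairs; the name-to-IP lookup happens
    // per call to next(), not once at construction time.
    public LateResolvingHostProvider(List<InetSocketAddress> unresolved) {
        this.servers = unresolved;
    }

    public InetSocketAddress next() {
        index = (index + 1) % servers.size();
        InetSocketAddress s = servers.get(index);
        // Constructing with a hostname triggers resolution now.
        return new InetSocketAddress(s.getHostString(), s.getPort());
    }
}
```

The round-robin index mirrors what a static provider does; only the moment of resolution changes.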
| ZooKeeper | ZOOKEEPER-2106 | Error when reading from leader causes JVM to hang |
Bug | Resolved | Critical | Invalid | Unassigned | Robert Joseph Evans | Robert Joseph Evans | 13/Jan/15 11:28 | 13/Jan/15 11:54 | 13/Jan/15 11:54 | 3.4.5 | 0 | 1 | I tried looking through existing JIRA for something like this, but the closest I came was ZOOKEEPER-2104. It looks very similar, but I don't know if it really is the same thing. Essentially we had a 5 node ensemble for a large storm cluster. For a few of the nodes at the same time they get an error that looks like. {code} WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@762] - Connection broken for id 2, my id = 4, error = java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:747) WARN [RecvWorker:2:QuorumCnxManager$RecvWorker@765] - Interrupting SendWorker WARN [SendWorker:2:QuorumCnxManager$SendWorker@679] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:831) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:62) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:667) WARN [SendWorker:2:QuorumCnxManager$SendWorker@688] - Send worker leaving thread WARN [QuorumPeer[myid=4]/0.0.0.0:50512:Follower@89] - Exception when following the leader java.net.SocketException: Connection reset at java.net.SocketInputStream.read(SocketInputStream.java:189) at java.net.SocketInputStream.read(SocketInputStream.java:121) at 
java.io.BufferedInputStream.fill(BufferedInputStream.java:235) at java.io.BufferedInputStream.read(BufferedInputStream.java:254) at java.io.DataInputStream.readInt(DataInputStream.java:387) at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63) at org.apache.zookeeper.server.quorum.QuorumPacket.deserialize(QuorumPacket.java:83) at org.apache.jute.BinaryInputArchive.readRecord(BinaryInputArchive.java:108) at org.apache.zookeeper.server.quorum.Learner.readPacket(Learner.java:152) at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:85) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:740) INFO [QuorumPeer[myid=4]/0.0.0.0:50512:Follower@166] - shutdown called java.lang.Exception: shutdown Follower at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:166) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:744) {code} After that all of the connections are shut down {code} INFO [QuorumPeer[myid=4]/0.0.0.0:50512:NIOServerCnxn@1001] - Closed socket connection for client ... {code} but it does not manage to have the JVM shut down {code} INFO [QuorumPeer[myid=4]/0.0.0.0:50512:FollowerZooKeeperServer@139] - Shutting down INFO [QuorumPeer[myid=4]/0.0.0.0:50512:ZooKeeperServer@419] - shutting down INFO [QuorumPeer[myid=4]/0.0.0.0:50512:FollowerRequestProcessor@105] - Shutting down INFO [QuorumPeer[myid=4]/0.0.0.0:50512:CommitProcessor@181] - Shutting down INFO [FollowerRequestProcessor:4:FollowerRequestProcessor@95] - FollowerRequestProcessor exited loop! INFO [QuorumPeer[myid=4]/0.0.0.0:50512:FinalRequestProcessor@415] - shutdown of request processor complete INFO [CommitProcessor:4:CommitProcessor@150] - CommitProcessor exited loop! 
WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:50512:NIOServerCnxn@354] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:50512:NIOServerCnxn@1001] - Closed socket connection for client /... (no session established for client) INFO [QuorumPeer[myid=4]/0.0.0.0:50512:SyncRequestProcessor@175] - Shutting down INFO [SyncThread:4:SyncRequestProcessor@155] - SyncRequestProcessor exited! INFO [QuorumPeer[myid=4]/0.0.0.0:50512:QuorumPeer@670] - LOOKING {code} after that all connections to that node initiate, and then are shut down with ZooKeeperServer not running. It seems to stay in this state indefinitely until the process is manually restarted. After that it recovers. We have seen this happen on multiple servers at the same time resulting in the entire ensemble being unusable. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 5 years, 10 weeks, 2 days ago | 0|i24bev: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2105 | PrintWriter left unclosed in NIOServerCnxn#checkFourLetterWord |
Bug | Resolved | Minor | Not A Problem | Unassigned | Ted Yu | Ted Yu | 09/Jan/15 20:11 | 19/Jan/15 00:22 | 19/Jan/15 00:22 | 0 | 2 | {code} final PrintWriter pwriter = new PrintWriter( new BufferedWriter(new SendBufferWriter())); {code} pwriter should be closed upon return from the method. |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 5 years, 9 weeks, 3 days ago | 0|i247xz: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
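The resolution above was "Not A Problem" (the writer wraps an in-memory buffer that does not need closing), but where a writer does need to be closed on every path, try-with-resources is the standard idiom. A generic sketch with a plain StringWriter standing in for SendBufferWriter:

```java
import java.io.BufferedWriter;
import java.io.PrintWriter;
import java.io.StringWriter;

public class WriterScope {
    // try-with-resources closes the PrintWriter (and the writers it wraps)
    // on every exit path, including exceptions.
    public static String render(String fourLetterWord) {
        StringWriter sink = new StringWriter();
        try (PrintWriter pwriter = new PrintWriter(new BufferedWriter(sink))) {
            pwriter.print(fourLetterWord);
        } // pwriter.close() flushes the BufferedWriter into sink
        return sink.toString();
    }
}
```

Closing the outermost writer also flushes and closes the wrapped BufferedWriter, so the buffered output is guaranteed to reach the underlying sink.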
| ZooKeeper | ZOOKEEPER-2104 | Sudden crash of all nodes in the cluster |
Bug | Open | Major | Unresolved | Unassigned | Benjamin Jaton | Benjamin Jaton | 09/Jan/15 19:48 | 09/Dec/16 22:23 | 3.4.6 | server | 4 | 21 | In a 3-node ensemble, suddenly all the nodes seem to fail, displaying "ZooKeeper is not running" messages. No retry seems to be happening after that. This is a request to understand what happened and probably to improve the logs when it does. See logs below: NODE1: -- no log for several days before this -- 2015-01-04 16:18:22,259 [myid:1] - WARN [SyncThread:1:FileTxnLog@321] - fsync-ing the write ahead log in SyncThread:1 took 11024ms which will adversely effect operation latency. See the ZooKeeper troubleshooting guide 2015-01-04 16:18:22,380 [myid:1] - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Follower@89] - Exception when following the leader java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63) at org.apache.zookeeper.server.quorum.QuorumPacket.deserialize(QuorumPacket.java:83) at org.apache.jute.BinaryInputArchive.readRecord(BinaryInputArchive.java:103) at org.apache.zookeeper.server.quorum.Learner.readPacket(Learner.java:153) at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:85) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:786) 2015-01-04 16:18:23,384 [myid:1] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@362] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running 2015-01-04 16:18:23,492 [myid:1] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@362] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running 2015-01-04 16:18:24,060 [myid:1] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@362] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running NODE2: -- no log for several days before this -- 
2015-01-04 16:18:21,899 [myid:3] - WARN [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:Follower@89] - Exception when following the leader java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63) at org.apache.zookeeper.server.quorum.QuorumPacket.deserialize(QuorumPacket.java:83) at org.apache.jute.BinaryInputArchive.readRecord(BinaryInputArchive.java:103) at org.apache.zookeeper.server.quorum.Learner.readPacket(Learner.java:153) at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:85) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:786) 2015-01-04 16:18:22,760 [myid:3] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@362] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running 2015-01-04 16:18:22,801 [myid:3] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@362] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running 2015-01-04 16:18:22,886 [myid:3] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@362] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running NODE3 (leader): -- no log for several days before this -- 2015-01-04 16:18:21,897 [myid:2] - WARN [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:LearnerHandler@687] - Closing connection to peer due to transaction timeout. 2015-01-04 16:18:21,898 [myid:2] - WARN [LearnerHandler-/204.53.107.249:43402:LearnerHandler@646] - ******* GOODBYE /204.53.107.249:43402 ******** 2015-01-04 16:18:21,905 [myid:2] - WARN [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:LearnerHandler@687] - Closing connection to peer due to transaction timeout. 
2015-01-04 16:18:21,907 [myid:2] - WARN [LearnerHandler-/204.53.107.247:45953:LearnerHandler@646] - ******* GOODBYE /204.53.107.247:45953 ******** 2015-01-04 16:18:21,918 [myid:2] - WARN [LearnerHandler-/204.53.107.247:45953:LearnerHandler@658] - Ignoring unexpected exception java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1219) at java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:340) at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:338) at org.apache.zookeeper.server.quorum.LearnerHandler.shutdown(LearnerHandler.java:656) at org.apache.zookeeper.server.quorum.LearnerHandler.run(LearnerHandler.java:649) 2015-01-04 16:18:23,003 [myid:2] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@362] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running 2015-01-04 16:18:23,007 [myid:2] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@362] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running 2015-01-04 16:18:23,115 [myid:2] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@362] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 3 years, 14 weeks, 5 days ago | 0|i247xb: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
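The ZOOKEEPER-2104 logs above show an 11024 ms fsync stall on NODE1 immediately before every follower drops. Whether a stall of that length can break quorum membership follows from the leader/follower timeout budget, roughly syncLimit * tickTime. A minimal sketch of that arithmetic (the tickTime and syncLimit values below are common defaults for illustration, not values taken from the reporter's cluster):

```python
def quorum_survives_stall(stall_ms, tick_time_ms=2000, sync_limit=5):
    """Illustrative model: a follower blocked in fsync for longer than
    syncLimit * tickTime misses its deadline with the leader and is
    dropped, matching the 'transaction timeout' GOODBYE messages above."""
    budget_ms = tick_time_ms * sync_limit
    return stall_ms <= budget_ms

# The stall reported on NODE1 vs. a default 2000 ms * 5 = 10000 ms budget:
print(quorum_survives_stall(11024))                   # False: follower dropped
# A larger tickTime would have absorbed the same stall:
print(quorum_survives_stall(11024, tick_time_ms=3000))  # True
```

This is only a back-of-the-envelope model, but it suggests why a single slow disk flush can cascade into "ZooKeeperServer not running" on every node.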
| ZooKeeper | ZOOKEEPER-2103 | ZooKeeper Client Configuration |
New Feature | Resolved | Minor | Duplicate | Chris Larsen | Chris Larsen | Chris Larsen | 07/Jan/15 15:55 | 08/Aug/16 10:42 | 08/Aug/16 10:42 | 3.6.0 | 3.5.2 | java client | 1 | 3 | ZOOKEEPER-2139 | All | I ran into an issue when connecting to two ZooKeeper clusters from the same JVM application. One of the clusters required SASL authentication while the other one did not. Unfortunately the client uses System properties to configure authentication, so the client was attempting to authenticate on the non-auth cluster, preventing a connection. To solve it, I implemented a base config class with helper methods for parsing config settings, as well as a client-specific subclass that parsed the system values but allowed overriding via programmatic values or via a file. There are also new Zookeeper constructors to use this config object. I implemented it so that it's completely backwards compatible, so it shouldn't break existing installs (and it hasn't yet in my testing). If folks like this, we could use the same config base for server configs and migrate away from system properties to per-object configs. It would also be helpful to centralize more of the "zookeeper.*" strings. Let me know what y'all think, and thanks! |
features, patch | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 32 weeks, 3 days ago | 0|i241qv: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
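The heart of the ZOOKEEPER-2103 proposal is replacing process-wide System properties with a per-instance configuration object that still falls back to them. A minimal sketch of that layering (ZooKeeper itself is Java; this is an illustrative Python model with hypothetical names, not the contributed patch):

```python
class ClientConfig:
    """Hypothetical per-instance client config: explicit programmatic
    overrides win over file-loaded values, which win over process-wide
    (system-property-like) values."""

    def __init__(self, system_props=None):
        self.system_props = dict(system_props or {})
        self.file_props = {}
        self.overrides = {}

    def load_file(self, props):
        self.file_props.update(props)

    def set(self, key, value):
        self.overrides[key] = value

    def get(self, key, default=None):
        # Precedence: programmatic > file > system-wide.
        for layer in (self.overrides, self.file_props, self.system_props):
            if key in layer:
                return layer[key]
        return default

# Two clients in one process: one SASL cluster, one without -- the exact
# situation that process-wide properties cannot express.
system_props = {"zookeeper.sasl.client": "true"}
sasl_client = ClientConfig(system_props)
plain_client = ClientConfig(system_props)
plain_client.set("zookeeper.sasl.client", "false")
print(sasl_client.get("zookeeper.sasl.client"))   # true
print(plain_client.get("zookeeper.sasl.client"))  # false
```

With per-instance objects, each ZooKeeper handle carries its own settings, so the second client no longer attempts SASL against the non-auth cluster.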
| ZooKeeper | ZOOKEEPER-2102 | commitandactivate messages spamming the quorum |
Bug | Resolved | Major | Invalid | Unassigned | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | 07/Jan/15 14:02 | 07/Jan/15 18:07 | 07/Jan/15 18:07 | server | 0 | 2 | Using zab-dump (https://github.com/twitter/zktraffic/pull/11), I am seeing this in a prod cluster running 3.5.0 + patches: {noformat} QuorumPacket( timestamp=18:45:35:962873, src=10.0.1.1:2889, type=commitandactivate, zxid=292104572694, length=114 ) QuorumPacket( timestamp=18:45:35:962876, src=10.0.1.1:2889, type=commitandactivate, zxid=292104572694, length=114 ) QuorumPacket( timestamp=18:45:35:962893, src=10.0.1.1:2889, type=commitandactivate, zxid=292104572694, length=114 ) .... {noformat} From a ~5min dump, I see ~80k QuorumPackets of which ~50k are commitandactivate packets! Sounds like some sort of loop. Any ideas [~shralex]? cc: [~hdeng], [~fpj] |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 5 years, 11 weeks, 1 day ago | 0|i241mn: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2101 | Transaction larger than max buffer of jute makes zookeeper unavailable |
Bug | Resolved | Major | Cannot Reproduce | Andor Molnar | Shaohui Liu | Shaohui Liu | 04/Jan/15 05:13 | 19/Dec/19 18:01 | 26/Mar/18 09:30 | 3.4.4 | 3.5.4 | jute | 0 | 15 | ZOOKEEPER-3496 | *Problem* For a multi operation, PrepRequestProcessor may produce a large transaction whose size is larger than the max buffer size of jute. There is a check of the buffer size in the readBuffer method of BinaryInputArchive, but no check in the writeBuffer method of BinaryOutputArchive, which causes the following: 1. The leader can sync the transaction to its txn log and send the large transaction to the followers, but the followers fail to read the transaction and can't sync with the leader. {code} 2015-01-04,12:42:26,474 WARN org.apache.zookeeper.server.quorum.Learner: [myid:2] Exception when following the leader java.io.IOException: Unreasonable length = 2054758 at org.apache.jute.BinaryInputArchive.readBuffer(BinaryInputArchive.java:100) at org.apache.zookeeper.server.quorum.QuorumPacket.deserialize(QuorumPacket.java:85) at org.apache.jute.BinaryInputArchive.readRecord(BinaryInputArchive.java:108) at org.apache.zookeeper.server.quorum.Learner.readPacket(Learner.java:152) at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:85) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:740) 2015-01-04,12:42:26,475 INFO org.apache.zookeeper.server.quorum.Learner: [myid:2] shutdown called java.lang.Exception: shutdown Follower at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:166) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:744) {code} 2. The leader loses all followers, which triggers a leader election. The old leader becomes leader again because it has up-to-date data. {code} 2015-01-04,12:42:28,502 INFO org.apache.zookeeper.server.quorum.Leader: [myid:3] Shutting down 2015-01-04,12:42:28,502 INFO org.apache.zookeeper.server.quorum.Leader: [myid:3] Shutdown called java.lang.Exception: shutdown Leader! 
reason: Only 1 followers, need 2 at org.apache.zookeeper.server.quorum.Leader.shutdown(Leader.java:496) at org.apache.zookeeper.server.quorum.Leader.lead(Leader.java:471) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:753) {code} 3. The leader cannot load the transaction from the txn log because the length of the data is larger than the max buffer of jute. {code} 2015-01-04,12:42:31,282 ERROR org.apache.zookeeper.server.quorum.QuorumPeer: [myid:3] Unable to load database on disk java.io.IOException: Unreasonable length = 2054758 at org.apache.jute.BinaryInputArchive.readBuffer(BinaryInputArchive.java:100) at org.apache.zookeeper.server.persistence.Util.readTxnBytes(Util.java:233) at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.next(FileTxnLog.java:602) at org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:157) at org.apache.zookeeper.server.ZKDatabase.loadDataBase(ZKDatabase.java:223) at org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:417) at org.apache.zookeeper.server.quorum.QuorumPeer.getLastLoggedZxid(QuorumPeer.java:546) at org.apache.zookeeper.server.quorum.FastLeaderElection.getInitLastLoggedZxid(FastLeaderElection.java:690) at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:737) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:716) {code} The ZooKeeper service will be unavailable until we enlarge jute.maxbuffer and restart the ZooKeeper cluster backing HBase. *Solution* Add a buffer size check in BinaryOutputArchive to prevent large transactions from being written to the log and sent to followers. But I am not sure whether there are side effects of throwing an IOException in BinaryOutputArchive and the RequestProcessors |
9223372036854775807 | No Perforce job exists for this issue. | 9 | 9223372036854775807 | 1 year, 5 weeks, 1 day ago | 0|i23xkn: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
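The asymmetry ZOOKEEPER-2101 describes — a size guard on the read path but none on the write path — can be made concrete with a tiny length-prefixed serializer. This is an illustrative Python model of the failure mode, not jute's actual Java API:

```python
MAX_BUFFER = 0xFFFFF  # ~1 MiB, the customary jute.maxbuffer default

def write_buffer(out, data, check=True):
    """With check=False this mirrors the reported BinaryOutputArchive
    behavior: the writer happily persists/sends a record that every
    reader will later reject."""
    if check and len(data) > MAX_BUFFER:
        raise IOError("Unreasonable length = %d" % len(data))
    out.append(len(data))
    out.append(data)

def read_buffer(inp):
    """Mirrors BinaryInputArchive.readBuffer: the read side does check."""
    length = inp[0]
    if length > MAX_BUFFER:
        raise IOError("Unreasonable length = %d" % length)
    return inp[1]

big = b"x" * 2054758  # the size from the reported stack traces
log = []
write_buffer(log, big, check=False)  # leader logs and sends it without complaint
try:
    read_buffer(log)                 # every follower (and later the leader's
except IOError as e:                 # own log replay) fails here
    print(e)
```

Adding the same guard on the write side (check=True) converts a cluster-wide outage into a single failed request, which is exactly the fix the reporter proposes.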
| ZooKeeper | ZOOKEEPER-2100 | ZooKeeperSaslClient doesn't shut down Login thread |
Bug | Open | Major | Unresolved | Unassigned | Gregory Chanan | Gregory Chanan | 29/Dec/14 18:59 | 11/Jan/15 11:54 | 3.4.5 | java client | 0 | 3 | SOLR-6915 | I found this in 3.4.5, but a quick perusal of the code suggests this exists in later versions as well. Setup: I'm running some ZooKeeper SASL tests with Hadoop's MiniKDC under Solr's test framework, which checks for things like thread leaks. The thread leak checker is complaining about the Login thread, which is created but never shut down. It's started here: https://github.com/apache/zookeeper/blob/6ebd23b32d2cf606e01906bee4460bf79eb7f3fa/src/java/main/org/apache/zookeeper/client/ZooKeeperSaslClient.java#L227 and you can verify via reading the code that it is never shut down. This may be intentional, because the Login object is static, so it is probably supposed to stick around for the lifetime of the application. This is not great for a test setup, where the idea is that a cluster and all associated clients are started/stopped for each test suite. You wouldn't want either: 1) a thread sticking around doing nothing, or 2) one sticking around doing something (because it makes the first suite that happens to run behave differently than subsequent suites). In addition, this only happens with SASL clients, so we'd want to turn off the leak checker only when running under SASL (so we don't miss other leaked threads), which is a bit more complexity than I would like. I'd be happy with a function I could call to say "I'm really done, close down everything, even the Login thread" or some automatic way of doing it. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 5 years, 12 weeks, 3 days ago | 0|i23ty7: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2099 | Using txnlog to sync a learner can corrupt the learner's datatree |
Bug | Patch Available | Major | Unresolved | Martin Kuchta | Santeri (Santtu) Voutilainen | Santeri (Santtu) Voutilainen | 29/Dec/14 16:35 | 25/Oct/16 16:47 | 3.5.0, 3.6.0 | server | 0 | 7 | When a learner syncs with the leader, it is possible for the leader to send the learner a DIFF that does NOT contain all the transactions between the learner's zxid and the leader's zxid, resulting in a corrupted datatree on the learner. For this to occur, the leader must have synced with a previous leader using a SNAP, and the zxid requested by the learner must still exist in the current leader's txnlog files. This issue was introduced by ZOOKEEPER-1413. *Scenario* A sample sequence in which this issue occurs: # Hosts H1 and H2 disconnect from the current leader H3 (crash, network partition, etc). The last zxid on these hosts is Z1. # Additional transactions occur on the cluster, resulting in the latest zxid being Z2. # Host H1 recovers and connects to H3 to sync and sends Z1 as part of its FOLLOWERINFO or OBSERVERINFO packet. # The leader, H3, decides to send a SNAP because a) it does not have the necessary records in the in-mem committed log, AND b) the size of the required txnlog to send is larger than the limit. # Host H1 successfully syncs with the leader (H3). At this point H1's txnlogs have records up to and including Z1 as well as Z2 and up. It does NOT have records between Z1 and Z2. # Host H3 fails; a leader election occurs and H1 is chosen as the leader # Host H2 recovers and connects to H1 to sync and sends Z1 in its FOLLOWERINFO/OBSERVERINFO packet # The leader, H1, determines it can send a DIFF. It concludes this because although it does not have the necessary records in its in-memory commit log, it does have Z1 in its txnlog and the size of the log is less than the limit. H1 ends up with a different size calculation than H3 because H1 is missing all the records between Z1 and Z2, so it has less log to send. 
# H2 receives the DIFF and applies the records to its data tree. Depending on the type of transactions that occurred between Z1 and Z2, it may not hit any errors when applying these records. H2 now has a corrupted view of the data tree because it is missing all the changes made by the transactions between Z1 and Z2. *Recovery* The way to recover from this situation is to delete the data/snap directory contents from the affected hosts and have them resync with the leader, at which point they will receive a SNAP since they will appear as empty hosts. *Workaround* A quick workaround for anyone concerned about this issue is to disable sync from the txnlog by changing the database size limit to 0. This is a code change, as it is not a configurable setting. *Potential fixes* There are several ways of fixing this. A few options: * Delete all snaps and txnlog files on a host when it receives a SNAP from the leader * Invalidate sync from txnlog after receiving a SNAP. This state must also be persisted on-disk so that the txnlogs with the gap cannot be used to provide a DIFF even after restart. A couple of ways in which the state could be persisted: ** Write a file (for example: loggap.<zxid>) in the data dir indicating that the host was sync'd with a SNAP and thus txnlogs might be missing. Presence of these files would be checked when reading txnlogs. ** Write a new record into the txnlog file as a "sync'd-by-snap-from-leader" marker. Readers of the txnlog would then check for the presence of this record when iterating through it and act appropriately. |
9223372036854775807 | No Perforce job exists for this issue. | 3 | 9223372036854775807 | 3 years, 21 weeks, 2 days ago | 0|i23tuf: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
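The ZOOKEEPER-2099 scenario turns on a subtle point: the leader's DIFF-vs-SNAP decision is based on the *size* of the txnlog to replay, and a txnlog with a silent gap looks smaller than it should. A toy Python model of that decision (zxids as integers, the size limit and function names are hypothetical, not the server's actual code):

```python
TXNLOG_SIZE_LIMIT = 5  # illustrative cap on how many txns a leader will replay as a DIFF

def sync_decision(txnlog, learner_zxid, limit=TXNLOG_SIZE_LIMIT):
    """Toy model of the leader's choice: send a DIFF of everything after
    learner_zxid if the learner's zxid is in the log and the replay fits
    the limit; otherwise send a full SNAP."""
    diff = [z for z in txnlog if z > learner_zxid]
    if learner_zxid in txnlog and len(diff) <= limit:
        return ("DIFF", diff)
    return ("SNAP", None)

Z1 = 10
# H3's complete log (Z1 plus everything up to Z2): the replay is too big,
# so H3 correctly sends a SNAP.
h3_log = list(range(1, 21))
print(sync_decision(h3_log, Z1))   # ('SNAP', None)

# H1 after SNAP-syncing: it has records up to Z1, then only the newest
# txns -- the records between Z1 and Z2 are missing, so its "diff" looks
# small enough and the check wrongly passes.
h1_log = list(range(1, 11)) + [19, 20]
print(sync_decision(h1_log, Z1))   # ('DIFF', [19, 20]) -- txns 11..18 silently lost
```

The model shows why H2 ends up corrupted without any error: the DIFF it receives is internally consistent, just incomplete.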
| ZooKeeper | ZOOKEEPER-2098 | QuorumCnxManager: use BufferedOutputStream for initial msg |
Improvement | Resolved | Major | Fixed | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | 29/Dec/14 15:34 | 03/Jun/15 12:46 | 29/May/15 17:46 | 3.5.0 | 3.5.1, 3.6.0 | quorum, server | 0 | 6 | ZOOKEEPER-2189 | Whilst writing fle-dump (a tool like [zk-dump|https://github.com/twitter/zktraffic/], but to dump FastLeaderElection messages), I noticed that QCM is using DataOutputStream (which doesn't buffer) directly. So all calls to write() are written immediately to the network, which means simple messages like two participants exchanging Votes can take a couple of RTTs! This is especially terrible for global clusters (i.e.: cross-country RTTs). The solution is to use BufferedOutputStream for the initial negotiation between members of the cluster. Note that there are other places where suboptimal (but not entirely unbuffered) writes to the network still exist. I'll get those in separate tickets. After using BufferedOutputStream we get only 1 RTT for the initial message, so elections and the time for participants to join a cluster are reduced. |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 4 years, 42 weeks, 5 days ago | 0|i23ttr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
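The ZOOKEEPER-2098 fix works because an unbuffered stream turns every small field write into its own network send, while a buffered stream coalesces them into one. A minimal sketch of that difference with a fake counting socket (the field layout below is illustrative, not QCM's exact wire format):

```python
class CountingSocket:
    """Stands in for a TCP connection; counts how many sends hit the wire."""
    def __init__(self):
        self.sends = 0
    def write(self, data):
        self.sends += 1

class BufferedOut:
    """Minimal BufferedOutputStream analogue: accumulate, send on flush."""
    def __init__(self, sock):
        self.sock, self.buf = sock, bytearray()
    def write(self, data):
        self.buf.extend(data)
    def flush(self):
        if self.buf:
            self.sock.write(bytes(self.buf))
            self.buf.clear()

def send_initial_msg(out, flush=None):
    # The initial negotiation is several small fields written back to back
    # (e.g. protocol version, server id, address length, address).
    for field in (b"\x00\x00\x00\x01", b"\x00" * 8, b"\x00\x00\x00\x09", b"host:2888"):
        out.write(field)
    if flush:
        flush()

raw = CountingSocket()
send_initial_msg(raw)
print(raw.sends)          # 4 separate sends -- each can cost latency

sock = CountingSocket()
buffered = BufferedOut(sock)
send_initial_msg(buffered, flush=buffered.flush)
print(sock.sends)         # 1 send for the whole message
```

With Nagle's algorithm disabled (as is typical for latency-sensitive protocols), each unbuffered write can become its own packet, which is how a simple vote exchange grows to multiple RTTs.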
| ZooKeeper | ZOOKEEPER-2097 | Clarify security requirement for Exists request |
Task | Patch Available | Minor | Unresolved | Ian Dimayuga | Ian Dimayuga | Ian Dimayuga | 19/Dec/14 17:30 | 05/Feb/20 07:12 | 3.4.6, 3.5.0 | 3.7.0, 3.5.8 | 0 | 1 | According to the [Programmer's Guide|http://zookeeper.apache.org/doc/current/zookeeperProgrammers.html]: bq. Everyone implicitly has LOOKUP permission. This allows you to stat a node, but nothing more. (The problem is, if you want to call zoo_exists() on a node that doesn't exist, there is no permission to check.) This implies that Exists has no security requirement, so the existing comment in FinalRequestProcessor {code}// TODO we need to figure out the security requirement for this!{code} can be removed. |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 1 year, 45 weeks ago | 0|i23ndj: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2096 | C client builds with incorrect error codes in VisualStudio 2010+ |
Bug | Resolved | Major | Fixed | Vitaly Stakhovsky | Vitaly Stakhovsky | Vitaly Stakhovsky | 19/Dec/14 12:41 | 02/Mar/16 20:30 | 02/Jun/15 16:44 | 3.4.6, 3.5.0 | 3.4.7, 3.5.1, 3.6.0 | build, c client | 0 | 5 | 3600 | 3600 | 0% | Windows MSVS 2010+ | It reports: warning C4005: 'EWOULDBLOCK' : macro redefinition warning C4005: 'EINPROGRESS' : macro redefinition In MSVS 2010+, these constants are now in <errno.h>. What's worse, they have different numeric values. Possible fix: In "src/c/include/winconfig.h" : #if _MSC_VER < 1600 #define EWOULDBLOCK WSAEWOULDBLOCK #define EINPROGRESS WSAEINPROGRESS #endif |
0% | 0% | 3600 | 3600 | 9223372036854775807 | No Perforce job exists for this issue. | 6 | 9223372036854775807 | 4 years, 42 weeks, 1 day ago | 0|i23mrz: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2095 | Add Systemd startup/conf files |
Improvement | Resolved | Minor | Won't Fix | Guillaume ALAUX | Guillaume ALAUX | Guillaume ALAUX | 18/Dec/14 16:58 | 03/Mar/16 19:32 | 03/Mar/16 19:32 | contrib | 1 | 3 | ZOOKEEPER-1604 | As adoption of systemd by distributions grows, it would be nice to have systemd configuration and startup files for Zookeeper in the upstream tree. I would thus like to contribute the following patch, which brings the following systemd files: - {{sysusers.d_zookeeper.conf}}: creates {{zookeeper}} Linux system user to run Zookeeper - {{tmpfiles.d_zookeeper.conf}}: creates temporary {{/var/log/zookeeper}} and {{/var/lib/zookeeper}} directories - {{zookeeper.service}}: regular systemd startup _script_ - {{zookeeper@.service}}: systemd startup _script_ for specific use (for instance when Zookeeper is invoked to support some other piece of software – [example for Kafka|http://pkgbuild.com/git/aur-mirror.git/tree/kafka/systemd_kafka.service#n3], [example for Storm|http://pkgbuild.com/git/aur-mirror.git/tree/storm/systemd_storm-nimbus.service#n3]) |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 3 weeks ago | 0|i23lfj: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
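For readers unfamiliar with the file types ZOOKEEPER-2095 lists, a minimal {{zookeeper.service}} might look like the sketch below. The paths, user name, and start command are illustrative assumptions, not the contents of the contributed patch:

```ini
# Hypothetical minimal unit; paths and user are illustrative only.
[Unit]
Description=Apache ZooKeeper server
After=network.target

[Service]
Type=forking
User=zookeeper
ExecStart=/usr/share/zookeeper/bin/zkServer.sh start
ExecStop=/usr/share/zookeeper/bin/zkServer.sh stop
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

The companion sysusers.d and tmpfiles.d fragments then let systemd create the {{zookeeper}} user and its runtime directories declaratively instead of via packaging scripts.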
| ZooKeeper | ZOOKEEPER-2094 | ZOOKEEPER-2063 SSL feature on Netty |
Sub-task | Closed | Major | Duplicate | Ian Dimayuga | Ian Dimayuga | Ian Dimayuga | 05/Dec/14 19:59 | 19/Dec/19 18:02 | 13/May/15 13:08 | 3.4.6, 3.5.0 | 3.5.2 | server | 0 | 7 | ZOOKEEPER-2120 | Add SSL handler to Netty pipeline, and a default X509AuthenticationProvider to perform authentication. Review board: https://reviews.apache.org/r/30753/diff/# |
9223372036854775807 | No Perforce job exists for this issue. | 17 | 9223372036854775807 | 4 years, 45 weeks, 1 day ago | 0|i2354f: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2093 | Expose cumulative latency and request count |
Improvement | Patch Available | Minor | Unresolved | Brian Brazil | Brian Brazil | Brian Brazil | 01/Dec/14 13:05 | 14/Dec/19 06:06 | 3.7.0 | jmx, server | 2 | 4 | Currently Zookeeper exposes the min, max and average request latency since server start. It'd also be useful to be able to calculate latency over a given time period. This patch exposes the total latency and a count of the number of requests. By tracking how these values increase over time, your monitoring system can calculate the average latency over a given period of time. This only provides milliseconds, as that's what's currently there. I'm seeing a test failure on ClientPortBindTest.testBindByAddress; this appears to be due to my IPv6 setup. |
patch | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 3 weeks ago | Expose total request count and latency |
Incompatible change
|
0|i22xlb: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
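The monitoring pattern ZOOKEEPER-2093 enables is standard counter differencing: scrape the cumulative total latency and request count twice, and the interval average is the ratio of the deltas. A small sketch (sample values are made up for illustration):

```python
def interval_avg_latency(prev, curr):
    """prev and curr are (total_latency_ms, request_count) samples taken
    at two scrape times; returns the average latency over just that
    interval, or None if no requests occurred in it."""
    d_total = curr[0] - prev[0]
    d_count = curr[1] - prev[1]
    if d_count == 0:
        return None
    return d_total / d_count

# At t0: 120000 ms of cumulative latency over 60000 requests.
# At t1: 126000 ms over 62000 requests.
print(interval_avg_latency((120000, 60000), (126000, 62000)))  # 3.0 (ms)
```

This is why cumulative counters are preferable to a server-computed "average since start": the consumer can pick any window, and a restart just shows up as a counter reset.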
| ZooKeeper | ZOOKEEPER-2092 | A zk instance can not be connected for ZooKeeperServer is not running |
Bug | Open | Major | Unresolved | Unassigned | Shaohui Liu | Shaohui Liu | 27/Nov/14 08:03 | 28/Nov/14 06:29 | 3.4.4 | 0 | 3 | In our 5-node zk cluster, we found a zk node that could never be connected to. From the stack trace we found that the ZooKeeperServer hung waiting for the server to be running. But the node was running normally and synced with the leader. {code} $ ./zkCli.sh -server 10.101.10.67:11000 ls / 2014-11-27 20:57:11,843 [myid:] - WARN [main-SendThread(lg-com-master02.bj:11000):ClientCnxn$SendThread@1089] - Session 0x0 for server lg-com-master02.bj/10.101.10.67:11000, unexpected error, closing socket connection and attempting reconnect java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.read0(Native Method) at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) at sun.nio.ch.IOUtil.read(IOUtil.java:192) at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379) at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:68) at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:353) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068) Exception in thread "main" org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for / at org.apache.zookeeper.KeeperException.create(KeeperException.java:99) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1469) at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1497) at org.apache.zookeeper.ZooKeeperMain.processZKCmd(ZooKeeperMain.java:726) at org.apache.zookeeper.ZooKeeperMain.processCmd(ZooKeeperMain.java:594) at org.apache.zookeeper.ZooKeeperMain.run(ZooKeeperMain.java:355) at org.apache.zookeeper.ZooKeeperMain.main(ZooKeeperMain.java:283) {code} ZooKeeperServer stack {code} "NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11000" daemon prio=10 
tid=0x00007f60143f7800 nid=0x31fd in Object.wait() [0x00007f5fd4678000] java.lang.Thread.State: TIMED_WAITING (on object monitor) at java.lang.Object.wait(Native Method) at org.apache.zookeeper.server.ZooKeeperServer.submitRequest(ZooKeeperServer.java:634) - locked <0x00000007602756a0> (a org.apache.zookeeper.server.quorum.FollowerZooKeeperServer) at org.apache.zookeeper.server.ZooKeeperServer.submitRequest(ZooKeeperServer.java:626) at org.apache.zookeeper.server.ZooKeeperServer.createSession(ZooKeeperServer.java:525) at org.apache.zookeeper.server.ZooKeeperServer.processConnectRequest(ZooKeeperServer.java:841) at org.apache.zookeeper.server.NIOServerCnxn.readConnectRequest(NIOServerCnxn.java:410) at org.apache.zookeeper.server.NIOServerCnxn.readPayload(NIOServerCnxn.java:200) at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:236) at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208) at java.lang.Thread.run(Thread.java:662) {code} Any suggestions about this problem? Thanks. |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 5 years, 16 weeks, 6 days ago | 0|i22ulj: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2091 | Possible logic error in ClientCnxnSocketNIO |
Bug | Patch Available | Major | Unresolved | Rakesh Radhakrishnan | Cheng | Cheng | 25/Nov/14 20:06 | 05/Feb/20 07:12 | 3.4.6 | 3.7.0, 3.5.8 | java client | 1 | 5 | When SASL authentication is enabled, the ZooKeeper client ultimately calls ClientCnxnSocketNIO#sendPacket(Packet p) to send a packet to the server: {code} @Override void sendPacket(Packet p) throws IOException { SocketChannel sock = (SocketChannel) sockKey.channel(); if (sock == null) { throw new IOException("Socket is null!"); } p.createBB(); ByteBuffer pbb = p.bb; sock.write(pbb); } {code} One problem I can see is that the sock is non-blocking, so when the sock's output buffer is full (theoretically), only part of the Packet is sent out and the communication will break. |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 3 years, 39 weeks, 2 days ago | 0|i22si7: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
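The hazard ZOOKEEPER-2091 describes is generic to non-blocking sockets: a single write() may accept only part of the buffer, and the remainder must be retried (or the channel re-registered for writability). A language-neutral sketch of the retry loop — this is an illustration of the concept, not the actual Java patch, and a real NIO fix would re-register for OP_WRITE rather than spin:

```python
def send_all(sock_write, data):
    """Repeatedly call a non-blocking-style write until every byte is
    accepted. sock_write(view) -> int returns how many bytes were taken
    this call, which may be fewer than len(view)."""
    view = memoryview(data)
    while view:                 # empty memoryview is falsy
        n = sock_write(view)
        view = view[n:]

# A fake socket that accepts at most 4 bytes per call, simulating a
# nearly-full kernel send buffer.
taken = bytearray()
def choppy_write(view):
    chunk = bytes(view[:4])
    taken.extend(chunk)
    return len(chunk)

send_all(choppy_write, b"ZooKeeperPacket")
print(bytes(taken))  # b'ZooKeeperPacket' -- nothing truncated
```

A single unchecked `sock.write(pbb)`, by contrast, would silently drop the tail of the packet whenever the send buffer is full, corrupting the SASL exchange mid-stream.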
| ZooKeeper | ZOOKEEPER-2090 | Fix Zookeeper docs "To be done" notices |
Bug | Open | Trivial | Unresolved | Unassigned | Hector | Hector | 25/Nov/14 09:54 | 13/Sep/16 14:32 | documentation | 1 | 2 | 86400 | 86400 | 0% | ZOOKEEPER-815, ZOOKEEPER-671, ZOOKEEPER-2431 | Official website | The docs on the website have been full of TBDs for a long time. While the docs are not entirely lacking, and you can get going with what there is, the main general-purpose entry points are not polished, and to newcomers (and anyone who just wants to see how ZK is progressing and refresh concepts) they give the impression that ZK is not very well maintained. The ZK overview doc (http://zookeeper.apache.org/doc/trunk/zookeeperOver.html) is supposed to be a first entry point for new Zookeeper users and it is full of _\[tbd\]s_: {quote} When the session ends the znode is deleted. Ephemeral nodes are useful when you want to implement \[tbd\]. {quote} {quote} And if the connection between the client and one of the Zoo Keeper servers is broken, the client will receive a local notification. These can be used to \[tbd\]. {quote} {quote} Timeliness - The clients view of the system is guaranteed to be up-to-date within a certain time bound. For more information on these, and how they can be used, see \[tbd\] {quote} {quote} For a more in-depth discussion on these, and how they can be used to implement higher level operations, please refer to \[tbd\] {quote} {quote} Some distributed applications have used it to: \[tbd: add uses from white paper and video presentation.\] For more information, see \[tbd\] {quote} {quote} These znodes exists as long as the session that created the znode is active. When the session ends the znode is deleted. Ephemeral nodes are useful when you want to implement \[tbd\]. 
{quote} The second entry point, "Getting Started" (http://zookeeper.apache.org/doc/trunk/zookeeperStarted.html) {quote} \[tbd: what is the other config param?\] {quote} Programmers guide (http://zookeeper.apache.org/doc/trunk/zookeeperProgrammers.html) {quote} "If the version it supplies doesn't match the actual version of the data, the update will fail. (This behavior can be overridden. For more information see... )\[tbd...\]" {quote} {quote} Connecting to ZooKeeper Read Operations Write Operations Handling Watches Miscelleaneous ZooKeeper Operations Program Structure, with Simple Example \[tbd\] {quote} {quote} ZooKeeper Whitepaper \[tbd: find url\] The definitive discussion of ZooKeeper design and performance, by Yahoo! Research API Reference \[tbd: find url\] The complete reference to the ZooKeeper API {quote} Administration guide (http://zookeeper.apache.org/doc/trunk/zookeeperAdmin.html) {quote} Provisioning Things to Consider: ZooKeeper Strengths and Limitations Administering {quote} {quote} TBD - tuning options for netty - currently there are none that are netty specific but we should add some. Esp around max bound on the number of reader worker threads netty creates. TBD - how to manage encryption TBD - how to manage certificates {quote} Since it is not a big deal to fix these, I think it is worth it to spend some hours doing it. |
0% | 0% | 86400 | 86400 | documentation | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 5 years, 17 weeks, 2 days ago | 0|i22rlr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2089 | Improve the building blocks part of the programmer's guide |
Improvement | Open | Major | Unresolved | Flavio Paiva Junqueira | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 25/Nov/14 09:46 | 05/Feb/20 07:16 | 3.4.6, 3.5.0 | 3.7.0, 3.5.8 | documentation | 0 | 1 | This part of the documentation has been incomplete since 3.1.2. The main reason seems to be that the information that was supposed to be there exists elsewhere in a different form. It needs a revision nonetheless. | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 5 years, 17 weeks, 2 days ago | 0|i22rlb: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2088 | Provide custom logging hook |
Wish | Open | Trivial | Unresolved | Jean-François SMIGIELSKI | Jean-François SMIGIELSKI | Jean-François SMIGIELSKI | 24/Nov/14 08:49 | 05/Feb/20 07:16 | 3.4.6 | 3.7.0, 3.5.8 | c client | 0 | 1 | Linux | Hello, proud Zookeeper maintainers! I incorporated the ZooKeeper C client API into a fairly large code base. The result is fine; everything works as expected except for the logs. Our code already manages its log traces via one API, the GLib-2.0 logging features. The current "FILE*"-based logging of the C client is not suitable for us. I propose to integrate a minimal change that would allow me to plug ZK's output into any other logging system: a simple logging hook. I pasted the patch I propose below. What do you think? diff -r -U3 zookeeper-3.4.6/src/c/include/zookeeper_log.h zookeeper-3.4.6-new/src/c/include/zookeeper_log.h --- zookeeper-3.4.6/src/c/include/zookeeper_log.h 2014-02-20 11:14:08.000000000 +0100 +++ zookeeper-3.4.6-new/src/c/include/zookeeper_log.h 2014-11-24 13:36:21.088124921 +0100 @@ -44,6 +44,10 @@ FILE* getLogStream(); +typedef void (zk_hook_log) (ZooLogLevel, int, const char*, const char *); + +void zoo_set_log_hook (zk_hook_log *hook); + #ifdef __cplusplus } #endif diff -r -U3 zookeeper-3.4.6/src/c/src/zk_log.c zookeeper-3.4.6-new/src/c/src/zk_log.c --- zookeeper-3.4.6/src/c/src/zk_log.c 2014-02-20 11:14:09.000000000 +0100 +++ zookeeper-3.4.6-new/src/c/src/zk_log.c 2014-11-24 14:28:46.151503385 +0100 @@ -122,9 +122,17 @@ return now_str; } +static zk_hook_log *log_hook = NULL; +void zoo_set_log_hook (zk_hook_log *hook) +{ + log_hook = hook; +} + void log_message(ZooLogLevel curLevel,int line,const char* funcName, const char* message) { + if (log_hook) return (*log_hook)(curLevel, line, funcName, message); + static const char* dbgLevelStr[]={"ZOO_INVALID","ZOO_ERROR","ZOO_WARN", "ZOO_INFO","ZOO_DEBUG"}; static pid_t pid=0; |
newbie | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 5 years, 17 weeks, 3 days ago | 0|i22prb: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2087 | Few UX improvements in ZooInspector |
Improvement | Closed | Minor | Fixed | Adam Dudczak | Adam Dudczak | Adam Dudczak | 21/Nov/14 18:28 | 21/Jul/16 16:18 | 09/Mar/16 23:29 | 3.4.6 | 3.5.2, 3.6.0 | contrib | 0 | 2 | A few simple changes would simplify using ZooInspector a lot. - Alphabetical order of nodes on a tree view - Short term caching of zookeeper nodes for faster rendering of node tree - Add/Delete node in context menu of a node - Keyboard shortcuts for add/deleting a node - Logging information that ZooInspector failed to load nodeViewers |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 4 years, 2 weeks ago |
Reviewed
|
0|i22ob3: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2086 | Unnecessary error log when defaultWatcher is not set for ZooKeeper client |
Bug | Patch Available | Minor | Unresolved | Saurabh Chhajed | Cheng | Cheng | 21/Nov/14 03:37 | 16/Dec/14 15:50 | 3.4.6 | java client | 1 | 4 | In org.apache.zookeeper.ZooKeeper.ZKWatchManager#materialize(), even if the defaultWatcher is null, it is still added to the Set and returned. This causes a flood of annoying error logs from org.apache.zookeeper.ClientCnxn.EventThread#processEvent, such as: 2014-11-21 15:21:23,279 - ERROR - [main-EventThread:ClientCnxn$EventThread@524] - Error while calling watcher java.lang.NullPointerException at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498) It can be fixed simply by adding a null check in ZKWatchManager. |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 5 years, 14 weeks, 2 days ago | 0|i22n5b: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
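The null-guard idea from ZOOKEEPER-2086 can be sketched as follows. This is an illustrative stand-in, not the actual ZKWatchManager code; the class and interface names here are hypothetical.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical stand-in for ZKWatchManager#materialize(): only add the
// default watcher to the returned set when it is non-null, so the event
// thread never invokes process() on a null reference.
public class WatchMaterializeSketch {
    interface Watcher { void process(String event); }

    static Set<Watcher> materialize(Watcher defaultWatcher) {
        Set<Watcher> result = new HashSet<>();
        if (defaultWatcher != null) {   // the proposed null check
            result.add(defaultWatcher);
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(materialize(null).size());     // no null entry added
        System.out.println(materialize(e -> {}).size());
    }
}
```

With this guard in place, a client created without a default watcher simply gets an empty watcher set, instead of a set containing null that later triggers the NullPointerException in the event thread.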
| ZooKeeper | ZOOKEEPER-2085 | Upgrade to jline2 |
Improvement | Resolved | Major | Duplicate | Unassigned | Brock Noland | Brock Noland | 18/Nov/14 19:54 | 18/Nov/14 19:56 | 18/Nov/14 19:56 | 0 | 1 | HIVE-8565, YARN-2815, ZOOKEEPER-1718, HIVE-8609 | Hive has upgraded to jline2 in HIVE-8609 due to a serious bug we found in HIVE-8565. It'd be great if Zookeeper could upgrade as well. | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 5 years, 18 weeks, 1 day ago | 0|i22j93: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2084 | Document local session parameters |
Improvement | Closed | Major | Resolved | Unassigned | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 15/Nov/14 11:04 | 14/Feb/20 10:23 | 10/Oct/19 23:37 | 3.5.0 | 3.6.0, 3.5.7 | documentation | 1 | 3 | ZOOKEEPER-3400 | Document the options introduced in ZOOKEEPER-1147. | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 22 weeks, 6 days ago | 0|i22fb3: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2083 | Remove deprecated LE implementations |
Improvement | Resolved | Major | Fixed | Enrico Olivelli | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 15/Nov/14 05:38 | 09/Jan/20 05:29 | 09/Jan/20 05:27 | 3.6.0 | 0 | 2 | 0 | 12600 | As per ZOOKEEPER-1153, we should remove implementations 0, 1, 2. | 100% | 100% | 12600 | 0 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 10 weeks ago | 0|i22f73: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2082 | Mistype of electionAlgo can fill out your disk in minutes |
Bug | Resolved | Minor | Fixed | Unassigned | Tianyin Xu | Tianyin Xu | 14/Nov/14 17:42 | 21/Oct/18 03:28 | 21/Oct/18 03:28 | 3.4.6 | leaderElection | 0 | 6 | Cluster (multi-server) setup | The parameter electionAlgo is supposed to be 0--3. However, when I mistyped the value in my zoo.cfg, ZK falls into a dead loop and starts filling up the entire disk with millions of the following two lines... 2014-11-14 14:28:44,588 \[myid:3\] - INFO \[QuorumPeer\[myid=3\]/0:0:0:0:0:0:0:0:2183:QuorumPeer@714\] - LOOKING 2014-11-14 14:28:44,588 \[myid:3\] - WARN \[QuorumPeer\[myid=3\]/0:0:0:0:0:0:0:0:2183:QuorumPeer@764\] - Unexpected exception java.lang.NullPointerException at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:762) The error is rooted in createElectionAlgorithm(), where an invalid setting leaves the Election object null. Then, in the while loop in run(), this causes a null-pointer dereference which is caught but not handled well. I think we should check the setting of electionAlg up front to make sure it is valid, instead of using it at runtime and hitting this failure. Let me know if you want a patch; I'd like to check it in the parseProperties() function in QuorumPeerConfig.java. Thanks! |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 2 years, 29 weeks, 6 days ago | 0|i22enz: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
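The fail-fast validation the reporter of ZOOKEEPER-2082 suggests for config parsing can be sketched like this. It is a minimal illustration, not the actual QuorumPeerConfig code; the method name is hypothetical, and the valid range 0--3 is taken from the issue text.

```java
// Illustrative sketch of validating electionAlg at config-parse time,
// instead of discovering an unsupported value in the election loop at
// runtime (where it caused a disk-filling retry loop).
public class ElectionAlgCheck {
    static int parseElectionAlg(String value) {
        final int alg;
        try {
            alg = Integer.parseInt(value.trim());
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException("electionAlg is not a number: " + value);
        }
        if (alg < 0 || alg > 3) {
            throw new IllegalArgumentException("electionAlg must be 0-3, got: " + alg);
        }
        return alg;
    }

    public static void main(String[] args) {
        System.out.println(parseElectionAlg("3"));   // valid value accepted
        try {
            parseElectionAlg("33");                  // mistyped value rejected at startup
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Rejecting the value while parsing zoo.cfg turns a runtime dead loop into a clear startup error.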
| ZooKeeper | ZOOKEEPER-2081 | Leader election cannot complete when a node is blackholed (unreachable) even when quorum is possible. |
Bug | Open | Major | Unresolved | Unassigned | Matan | Matan | 13/Nov/14 20:27 | 29/Oct/15 14:33 | 3.3.6, 3.4.6 | leaderElection, quorum | 0 | 13 | Verified on RHEL and Mac OS X. | I noticed a situation when one of our 3-node clusters on RHEL lost a machine due to PSU failure. The remaining two nodes failed to complete leader election and would continually restart the leader election process. Restarting the nodes would not help; they would reach the exact same state. This was curious, so I spent some time and managed to reproduce it on my local machine, and found what looks like the main factor: when a node is unreachable (timeouts), the election process somehow gets out of sync. Once a leader is decided, the follower tries to connect to the leader, but the leader is not listening. Then the follower gives up and the process starts again ad infinitum. How to reproduce on a local machine: 1. Set up a 3-node cluster of ZK. Note we only need to set up 2 boxes, since we'll just make the third unreachable: MyId 1: server.1=MyMachine:2881:3881 server.2=<Put any IP that we can block>:2882:3882 server.3=MyMachine:2883:3883 MyId 3: server.1=MyMachine:2881:3881 server.2=<Put any IP that we can block>:2882:3882 server.3=MyMachine:2883:3883 2. Set up a blackhole route for the IP you chose (Mac OS X; Linux is similar): > route add -host <IP you selected> 127.0.0.1 -blackhole 3. Start your 2 nodes. They will never reach quorum. However, if I remove the blackhole route and just do not start the 3rd instance (but the host is still reachable), it works fine and quorum is reached almost immediately. It seems the difference between a "timeout" and a "connection refused" somehow makes all the difference in the election process. I verified this behavior on 3.4.6 and 3.3.6. |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 4 years, 21 weeks ago | 0|i22cz3: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2080 | ZOOKEEPER-2135 Fix deadlock in dynamic reconfiguration |
Sub-task | Closed | Major | Fixed | Michael Han | Ted Yu | Ted Yu | 12/Nov/14 18:54 | 17/May/17 23:43 | 09/Feb/17 00:10 | 3.5.2 | 3.5.3, 3.6.0 | server | 0 | 11 | ZOOKEEPER-2246, ZOOKEEPER-1807, ZOOKEEPER-2164, ZOOKEEPER-2778, ZOOKEEPER-1806, ZOOKEEPER-901 | I got the following test failure on MacBook with trunk code: {code} Testcase: testCurrentObserverIsParticipantInNewConfig took 93.628 sec FAILED waiting for server 2 being up junit.framework.AssertionFailedError: waiting for server 2 being up at org.apache.zookeeper.server.quorum.ReconfigRecoveryTest.testCurrentObserverIsParticipantInNewConfig(ReconfigRecoveryTest.java:529) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:52) {code} |
9223372036854775807 | No Perforce job exists for this issue. | 9 | 9223372036854775807 | 2 years, 45 weeks, 5 days ago | 0|i22b33: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2079 | Stop daemon with "kill" rather than "kill -9" |
Improvement | Resolved | Minor | Fixed | Guillaume ALAUX | Guillaume ALAUX | Guillaume ALAUX | 12/Nov/14 12:08 | 12/Oct/17 04:45 | 17/Nov/14 01:50 | 3.5.1, 3.6.0 | scripts | 0 | 4 | *nix | Script `zkServer.sh` stops zookeeper by sending the java process a `kill -9` (SIGKILL). As there seems to be no technical reasons to use such a radical signal rather than the default SIGTERM (-15), I would propose to just use `kill` rather than `kill -9`. My use case is for Systemd service files for Zookeeper which always consider Zookeeper java process as failing when a clean `stop` is issued. Systemd output showing this "fail": ---------------8<--------------- # sudo systemctl status zookeeper.service ● zookeeper.service - Highly reliable distributed coordination server Loaded: loaded (/usr/lib/systemd/system/zookeeper.service; disabled) Active: failed (Result: signal) since Wed 2014-11-05 11:23:29 CET; 2s ago Process: 656 ExecStop=/usr/bin/zkServer.sh stop (code=exited, status=0/SUCCESS) Process: 406 ExecStart=/usr/bin/zkServer.sh start (code=exited, status=0/SUCCESS) Main PID: 414 (code=killed, signal=KILL) Nov 05 11:23:29 magenta zookeeper[656]: Stopping zookeeper ... STOPPED Nov 05 11:23:29 magenta systemd[1]: zookeeper.service: main process exited, code=killed, status=9/KILL Nov 05 11:23:29 magenta systemd[1]: Unit zookeeper.service entered failed state. ---------------8<--------------- There is no way to make this `status=9/KILL` to be recognized by Systemd as a regular exit code, even with `SuccessExitStatus=9 KILL SIGKILL`. 
On the other hand, turning this `kill -9` into a regular `kill` (-15 implied) makes it: ---------------8<--------------- # sudo systemctl status zookeeper.service ● zookeeper.service - Highly reliable distributed coordination server Loaded: loaded (/usr/lib/systemd/system/zookeeper.service; disabled) Active: inactive (dead) Nov 05 11:14:27 magenta zookeeper[30032]: Using config: /usr/share/zookeeper/bin/../conf/zoo.cfg Nov 05 11:14:27 magenta zookeeper[30032]: Stopping zookeeper ... STOPPED ---------------8<--------------- |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 2 years, 23 weeks ago | Kill java process with `SIGTERM` rather than `SIGKILL` | 0|i22afz: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2078 | zkServer.sh uses pattern unsupported by "grep" on Solaris |
Bug | Resolved | Minor | Duplicate | Chris Nauroth | metatech | metatech | 12/Nov/14 07:09 | 01/May/15 01:57 | 01/May/15 01:57 | 3.4.5 | scripts | 0 | 2 | ZOOKEEPER-1927 | Solaris 11 | The script "zkServer.sh" contains a pattern (POSIX "character class" syntax) which is not supported by "grep" on Solaris (both versions 10 and 11). {code} ZOO_DATADIR="$(grep "^[[:space:]]*dataDir" "$ZOOCFG" | sed -e 's/.*=//')" {code} This results in the environment variable being set to an empty value, which later gives the following error: {code} Starting zookeeper ... bin/zkServer.sh: line 114: /zookeeper_server.pid: Permission denied {code} The workaround is to simplify the pattern used by "grep": {code} ZOO_DATADIR="$(grep "^dataDir" "$ZOOCFG" | sed -e 's/.*=//')" {code} The same pattern is also used in the "status" command, which fails to read the "clientPort", which results in the following error: {code} Error contacting service. It is probably not running. {code} |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 46 weeks, 6 days ago | 0|i22a1r: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2077 | Wild-card/Regex Support for Zookeeper client commands |
New Feature | Open | Major | Unresolved | Unassigned | Vivek Madani | Vivek Madani | 12/Nov/14 03:44 | 17/Nov/15 01:14 | java client | 2 | 5 | We had a use case where we had to list nodes matching a particular pattern under a given path. Looking at the ZK client commands, it seems they do not support wildcards/regex. I overcame this by making some basic changes to LSCommand.java and adding a "-m" switch which accepts a regex. Since I implemented this using java.util.regex, it supports everything that Java regex supports. Such functionality could be useful for 'ls' as well as 'delete' (and deleteall). Though I implemented this in the client code for ls, it could be done on the server side as well, and I have a preliminary plan for ls, delete, and deleteall. Points to consider if such support is implemented: 1. Do we support Java regex or Unix shell wildcards ( * )? 2. Right now, create allows creating nodes with characters like * - we need to make sure that such a change does not break anything or create confusion (Unix too allows creating a directory named *, BTW). Any thoughts on whether this would be a worthwhile addition to the ZooKeeper client? If so, I can work on submitting a patch. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 18 weeks, 2 days ago | 0|i229u7: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2076 | Improve Leader Change Mechanism |
Improvement | Open | Major | Unresolved | Atri Sharma | Alexander Shraer | Alexander Shraer | 10/Nov/14 18:59 | 31/Mar/17 13:46 | 3.5.0 | server | 0 | 5 | When a leader is removed during a reconfiguration, ZOOKEEPER-107 uses a mechanism where the old leader nominates the new one. Although it reduces the time for a new leader to be elected, it still takes too long. This JIRA is for two things: 1. Improve the mechanism, e.g., avoid loading snapshots, etc. during the handoff. 2. Make it a first-class citizen and export it as a client API. We get questions about this once in a while - how do I cause a different leader to be elected? Currently the answer is to either kill or reconfigure the current leader. Anyone interested in working on this? |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 2 years, 50 weeks, 6 days ago | 0|i227if: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2075 | Utils.bufEquals |
Bug | Open | Major | Unresolved | Unassigned | Charlie Helin | Charlie Helin | 07/Nov/14 15:22 | 07/Nov/14 15:22 | 3.4.6 | java client | 0 | 1 | Just happened to notice Utils.bufEquals(byte[], byte[]) as a rather large outlier (~7% CPU time) when running with an attached profiler. By just simply switching the implementation to delegate directly to Arrays.equals(byte[], byte[]) the invocation disappears from the profile. The reason for this is that this is one of the methods which the JIT (not the interpreter) will generate an intrinsic for, using the builtin support of the CPU to do the check. The fix is trivial {code} public static boolean bufEquals(byte onearray[], byte twoarray[] ) { return Arrays.equals(onearray, twoarray); } {code} |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 5 years, 19 weeks, 6 days ago | 0|i224gf: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2074 | Incorrect exit codes for "./zkCli.sh cmd arg" |
Bug | Closed | Minor | Fixed | Abraham Fine | Surendra Singh Lilhore | Surendra Singh Lilhore | 07/Nov/14 04:20 | 17/May/17 23:44 | 10/Aug/16 13:33 | 3.5.0 | 3.5.3, 3.6.0 | 0 | 10 | ZOOKEEPER-1898 | Linux@hghoulaslx406:/> $ZOOKEEPER_HOME/bin/zkCli.sh create /test "test" Created /test1 Linux@hghoulaslx406:/> echo $? 0 Linux@hghoulaslx406:/> $ZOOKEEPER_HOME/bin/zkCli.sh create /test "test" Node already exists: /test1 Linux@hghoulaslx406:/> echo $? 0 Linux@hghoulaslx406:/> $ZOOKEEPER_HOME/bin/zkCli.sh delete /test Linux@hghoulaslx406:/> echo $? 0 Linux@hghoulaslx406:/> $ZOOKEEPER_HOME/bin/zkCli.sh delete /test Node does not exist: /test1 Linux@hghoulaslx406:/> echo $? 0 Failed commands should return exit code 1 instead of 0. |
9223372036854775807 | No Perforce job exists for this issue. | 5 | 9223372036854775807 | 3 years, 32 weeks, 1 day ago |
Reviewed
|
0|i223lz: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
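The behavior ZOOKEEPER-2074 asks for — a one-shot zkCli command reporting failure to the calling shell — can be sketched as below. This is a simplified stand-in, not the actual ZooKeeperMain code, and the command logic is hypothetical.

```java
// Sketch of propagating a command failure as a nonzero process exit code,
// so that `echo $?` in the shell distinguishes success from failure.
public class ExitCodeSketch {
    // Simplified stand-in for a one-shot CLI command such as `delete /test`.
    static int runCommand(boolean nodeExists) {
        if (!nodeExists) {
            System.err.println("Node does not exist: /test");
            return 1;           // signal failure to the calling shell
        }
        System.out.println("Deleted /test");
        return 0;
    }

    public static void main(String[] args) {
        // The key point: the command's outcome must reach System.exit,
        // rather than the process always terminating with status 0.
        System.exit(runCommand(false));
    }
}
```

Scripts wrapping zkCli.sh can then detect failed creates and deletes instead of always seeing exit code 0.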
| ZooKeeper | ZOOKEEPER-2073 | Memory leak on zookeeper_close |
Bug | Resolved | Critical | Fixed | Dave Gosselin | Dave Gosselin | Dave Gosselin | 04/Nov/14 12:10 | 22/Feb/15 17:52 | 22/Feb/15 17:09 | 3.6.0 | 3.4.7, 3.5.1, 3.6.0 | c client | 0 | 5 | When running valgrind against a zookeeper client using the C API, I noticed an occasional memory leak on zookeeper_close. I traced the issue to a regression added by fix ZOOKEEPER-804. The attached patch fixes the regression and the associated memory leak. | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 5 years, 4 weeks, 4 days ago | 0|i21y9j: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2072 | Netty Server Should Configure Child Channel Pipeline By Specifying ChannelPipelineFactory |
Bug | Resolved | Major | Fixed | Hongchao Deng | Hongchao Deng | Hongchao Deng | 28/Oct/14 18:19 | 30/Jan/15 06:13 | 29/Jan/15 21:26 | 3.5.1, 3.6.0 | server | 0 | 6 | ZOOKEEPER-2063 | Currently, netty server is setting up child channel in this way: {code} bootstrap.getPipeline().addLast("servercnxnfactory", channelHandler); {code} According to the [netty doc|http://netty.io/3.9/api/org/jboss/netty/bootstrap/ServerBootstrap.html], bq. you cannot use this approach if you are going to open more than one Channels or run a server that accepts incoming connections to create its child channels. |
9223372036854775807 | No Perforce job exists for this issue. | 3 | 9223372036854775807 | 5 years, 7 weeks, 6 days ago | 0|i21p3b: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2071 | Update docs for Apache branding and trademark requirements |
Bug | Open | Major | Unresolved | Wendy Smoak | Wendy Smoak | Wendy Smoak | 26/Oct/14 19:38 | 08/Nov/14 06:18 | 3.5.0 | 0 | 2 | The documentation needs to follow the Apache branding and trademark requirements: http://www.apache.org/foundation/marks/pmcs.html |
9223372036854775807 | No Perforce job exists for this issue. | 3 | 9223372036854775807 | 5 years, 19 weeks, 5 days ago |
Incompatible change
|
0|i21le7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2070 | Explain how to update and publish the project website |
Task | Open | Minor | Unresolved | Unassigned | Wendy Smoak | Wendy Smoak | 26/Oct/14 19:30 | 26/Oct/14 19:34 | 0 | 1 | Need to document how the different parts of the project website get updated and published, both for committers, and for potential contributors who do not have write access. | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 5 years, 21 weeks, 4 days ago | 0|i21ldz: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2069 | ZOOKEEPER-2063 Netty Support for ClientCnxnSocket |
Sub-task | Resolved | Major | Fixed | Hongchao Deng | Hongchao Deng | Hongchao Deng | 26/Oct/14 12:58 | 29/Nov/16 00:49 | 20/Dec/14 09:37 | 3.5.1, 3.6.0 | 0 | 7 | Review Board: https://reviews.apache.org/r/27244/diff/# | 9223372036854775807 | No Perforce job exists for this issue. | 21 | 9223372036854775807 | 5 years, 13 weeks, 4 days ago | 0|i21l7j: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2068 | ServerCnxnTest.testServerCnxnExpiry failed when using Netty server option |
Bug | Open | Major | Unresolved | Unassigned | Hongchao Deng | Hongchao Deng | 25/Oct/14 23:38 | 01/Nov/14 11:57 | 0 | 1 | ZOOKEEPER-2063 | If using Netty server (setting "zookeeper.serverCnxnFactory" to "NettyServerCnxnFactory"), ServerCnxnTest.testServerCnxnExpiry always failed | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 5 years, 21 weeks, 4 days ago | 0|i21kzb: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2067 | Init script fails to track PID file when using a non-standard dataDir |
Bug | Patch Available | Minor | Unresolved | Jeremy Carroll | Jeremy Carroll | Jeremy Carroll | 24/Oct/14 21:40 | 03/Mar/16 01:14 | 3.4.6, 3.5.0 | 1 | 2 | When setting a dataDir in zoo.cfg that does not match /var/lib/zookeeper, the supplied init.d script failed to track the PID file. This change moves the logic that is present in zkServer.sh to determine the PID location into zkEnv.sh. Also removed the hard coded path for the zookeeper dataDir. | debian | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 3 weeks ago | packaging | 0|i21kkv: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2066 | Updates to README.txt |
Improvement | Resolved | Minor | Fixed | Camille Fournier | Wendy Smoak | Wendy Smoak | 24/Oct/14 11:16 | 25/Oct/14 07:25 | 24/Oct/14 22:39 | 3.5.1 | 0 | 3 | Updates to README.txt - first reference should be to Apache ZooKeeper - fix obsolete ibiblio-rsync-repository url - better describe the release process - minor grammar and punctuation changes |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 5 years, 21 weeks, 5 days ago | 0|i21jqf: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2065 | Debian package java dependency |
Improvement | Resolved | Minor | Won't Fix | Unassigned | Jeremy Carroll | Jeremy Carroll | 22/Oct/14 20:05 | 03/Mar/16 11:18 | 03/Mar/16 11:17 | 3.4.6, 3.5.0 | 0 | 1 | ZOOKEEPER-1604 | The Debian control file sets a dependency on sun-java6-jre. We currently run Zookeeper in production with Java 1.7. This makes it difficult to support in our environment since it attempts to install an older JRE upon package installation. I propose that we change this line from sun-java6-jre to default-jre. Then the operator of the system can choose which Java version to run with. |
debian | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 3 weeks ago | packaging | 0|i21h4n: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2064 | Prevent resource leak in various classes |
Bug | Resolved | Critical | Fixed | Ted Yu | Ted Yu | Ted Yu | 21/Oct/14 15:57 | 30/Nov/14 06:23 | 29/Nov/14 10:57 | 3.4.7, 3.5.1, 3.6.0 | 0 | 4 | Various classes have potential resource leaks, e.g. LogIterator / RandomAccessFileReader is not closed upon return from the method. The corresponding close() should be called to prevent the leak. |
9223372036854775807 | No Perforce job exists for this issue. | 3 | 9223372036854775807 | 5 years, 16 weeks, 4 days ago | 0|i21f1r: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2063 | Netty+SSL support for client-server communication |
New Feature | Open | Major | Unresolved | Unassigned | Hongchao Deng | Hongchao Deng | 17/Oct/14 16:34 | 24/Oct/17 00:46 | 1 | 10 | ZOOKEEPER-2069, ZOOKEEPER-2094, ZOOKEEPER-2122 | ZOOKEEPER-2068, ZOOKEEPER-2072 | ZooKeeper currently have netty option on server side. We want to support netty on client side too. After that, we could add ssl support based on netty channel. | 100% | 51600 | 0 | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 5 years, 5 weeks, 1 day ago | 0|i21b4f: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2062 | RemoveWatchesTest takes forever to run |
Bug | Resolved | Major | Fixed | Chris Nauroth | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 14/Oct/14 19:23 | 16/Dec/18 09:27 | 04/May/15 22:44 | 3.5.0 | 3.5.1, 3.6.0 | tests | 0 | 7 | ZOOKEEPER-1274 | [junit] Running org.apache.zookeeper.RemoveWatchesTest [junit] Tests run: 46, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 306.188 sec |
remove_watches | 9223372036854775807 | No Perforce job exists for this issue. | 4 | 9223372036854775807 | 4 years, 46 weeks, 2 days ago | 0|i216dr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2061 | Zookeeper needs an official RPM package. I am happy to build and submit one. |
Wish | Resolved | Major | Won't Fix | Unassigned | Brian Weber | Brian Weber | 14/Oct/14 14:51 | 03/Mar/16 11:24 | 03/Mar/16 11:24 | build | 1 | 3 | ZOOKEEPER-1604 | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 3 weeks ago | 0|i2160v: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2060 | Trace bug in NettyServerCnxnFactory |
Bug | Resolved | Major | Fixed | Ian Dimayuga | Ian Dimayuga | Ian Dimayuga | 14/Oct/14 13:58 | 30/Nov/14 06:23 | 19/Nov/14 17:40 | 3.4.6, 3.5.0 | 3.4.7, 3.5.1, 3.6.0 | server | 0 | 5 | 86400 | 86400 | 0% | In NettyServerCnxnFactory, high throughput triggers a deadlock. This is caused by a channel-buffer-dumping debug statement in NettyServerCnxnFactory.java that is executed regardless of log level. This code path only executes when the server is throttling, but when it does it encounters a race and occasional deadlock between the channel buffer and NettyServerCnxn (jstack attached). The proposed fix adds the debug logging guard to this statement, similar to other existing statements. |
0% | 0% | 86400 | 86400 | 9223372036854775807 | No Perforce job exists for this issue. | 3 | 9223372036854775807 | 5 years, 16 weeks, 4 days ago | 0|i215xb: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2059 | Command "./zkCli.sh -server host:port cmd args" doesn't work; the 3.4.5 version works fine |
Bug | Resolved | Major | Cannot Reproduce | Unassigned | huanghaijun | huanghaijun | 09/Oct/14 05:03 | 26/Aug/15 16:48 | 26/Aug/15 16:48 | 3.4.6 | 0 | 3 | Using a command like [./zkCli.sh -server host:port cmd args], such as [./zkCli.sh -server localhost:2181 create /test ""] to create a node, works fine in 3.4.5 but not in 3.4.6. For 3.4.5 it is OK: zookeeper-3.4.5/bin> ./zkCli.sh -server localhost:34096 create /test "" Connecting to localhost:34096 WATCHER:: WatchedEvent state:SyncConnected type:None path:null Created /test For 3.4.6 it is not: zookeeper-3.4.6/bin> ./zkCli.sh -server localhost:43096 crate /test1 "" Connecting to localhost:43096 .... 2014-10-10 01:24:44,517 [myid:] - INFO [main:ZooKeeper@438] - Initiating client connection, connectString=localhost:43096 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@48b8f82d |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 30 weeks, 1 day ago | 0|i20z7j: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2058 | rat: exclude *.cer files |
Bug | Resolved | Major | Fixed | Michi Mutsuzaki | Michi Mutsuzaki | Michi Mutsuzaki | 09/Oct/14 01:04 | 13/Oct/14 00:22 | 12/Oct/14 23:20 | 3.6.0 | build | 1 | 5 | Somehow the release audit started complaining about *.cer files. | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 5 years, 23 weeks, 3 days ago | 0|i20yz3: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2057 | ZooKeeper should be as fast as Kafka |
Wish | Open | Major | Unresolved | Unassigned | Hongchao Deng | Hongchao Deng | 08/Oct/14 20:52 | 08/Oct/14 20:52 | 0 | 1 | Kafka can achieve ~750K records/s with async replication, so it is highly performant under an eventual consistency model. Some might argue that ZooKeeper uses a strong consistency model. Nonetheless there is nuance -- ZooKeeper can read stale data. Let's say that stale data is "consistent data". ZK could do async replication and provide only consistent data. This could be achieved with an MVCC database design. There might be other benefits too, e.g. a watcher could now carry some kind of version, so reconnection won't incur data loss, and multi could roll back easily to an older version. However, this requires change no less than rewriting ZK. Just raising this topic to see what people think. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 5 years, 24 weeks ago | 0|i20ypr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2056 | Zookeeper 3.4.x and 3.5.0-alpha is not OSGi compliant |
Bug | Resolved | Major | Fixed | Deiwin Sarjas | Keren Dong | Keren Dong | 08/Oct/14 11:51 | 21/Aug/15 12:27 | 06/Apr/15 21:55 | 3.4.6, 3.5.0 | 3.4.7, 3.5.1, 3.6.0 | 2 | 9 | ZOOKEEPER-1942, ZOOKEEPER-2242, ZOOKEEPER-1334 | Similar to this issue https://issues.apache.org/jira/browse/ZOOKEEPER-1334, the MANIFEST.MF is still flawed. When using in OSGi, I got this exception: java.lang.NoClassDefFoundError: org/ietf/jgss/GSSException at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1063)[168:org.apache.hadoop.zookeeper:3.5.01] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1114)[168:org.apache.hadoop.zookeeper:3.5.01] Caused by: java.lang.ClassNotFoundException: org.ietf.jgss.GSSException not found by org.apache.hadoop.zookeeper [168] at org.apache.felix.framework.BundleWiringImpl.findClassOrResourceByDelegation(BundleWiringImpl.java:1532)[org.apache.felix.framework-4.2.1.jar:] at org.apache.felix.framework.BundleWiringImpl.access$400(BundleWiringImpl.java:75)[org.apache.felix.framework-4.2.1.jar:] at org.apache.felix.framework.BundleWiringImpl$BundleClassLoader.loadClass(BundleWiringImpl.java:1955)[org.apache.felix.framework-4.2.1.jar:] at java.lang.ClassLoader.loadClass(ClassLoader.java:356)[:1.7.0_15] ... 
2 more Looking at the bundle headers, it doesn't have the package org.ietf.jgss imported: Import-Package = javax.management;resolution:=optional, javax.security.auth.callback, javax.security.auth.login, javax.security.sasl, org.slf4j;version="[1.6,2)", org.jboss.netty.buffer;resolution:=optional;version="[3.2,4)", org.jboss.netty.channel;resolution:=optional;version="[3.2,4)", org.jboss.netty.channel.group;resolution:=optional;version="[3.2,4)", org.jboss.netty.channel.socket.nio;resolution:=optional;version="[3.2,4)", org.osgi.framework;resolution:=optional;version="[1.5,2)", org.osgi.util.tracker;resolution:=optional;version="[1.4,2)" Export-Package = org.apache.zookeeper;version=3.5.01, org.apache.zookeeper.client;version=3.5.01, org.apache.zookeeper.data;version=3.5.01, org.apache.zookeeper.version;version=3.5.01, org.apache.zookeeper.server;version=3.5.01, org.apache.zookeeper.server.auth;version=3.5.01, org.apache.zookeeper.server.persistence;version=3.5.01, org.apache.zookeeper.server.quorum;version=3.5.01, org.apache.zookeeper.common;version=3.5.01 |
easyfix | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 50 weeks, 2 days ago | 0|i20xsv: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2055 | Don't throw ArrayIndexOutOfBoundsException when SASL username/password isn't specified |
Bug | Patch Available | Minor | Unresolved | Steve R | Steve R | Steve R | 07/Oct/14 07:56 | 03/Mar/16 01:06 | 0 | 1 | When using SASLAuthenticationProvider and the jaas.conf file doesn't have a username and/or password for either the server or client configuration, an ArrayIndexOutOfBoundsException is thrown when the client tries to connect via zkCli. Example conf file: Server { org.apache.zookeeper.server.auth.DigestLoginModule required; }; Client { org.apache.zookeeper.server.auth.DigestLoginModule required username="bob" password="bob123"; }; Shows the resulting information: INFO [main-SendThread(127.0.0.1:2181)] Client will use DIGEST-MD5 as SASL mechanism. ERROR[main-SendThread(127.0.0.1:2181)] Exception while trying to create SASL client: java.lang.ArrayIndexOutOfBoundsException: Array index out of range: 0 |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 3 weeks ago | 0|i20vnz: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
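The fix proposed in ZOOKEEPER-2055 amounts to validating the JAAS options before indexing into them. A minimal sketch of that defensive pattern (the class and method names here are illustrative, not the actual ZooKeeper internals):

```java
import java.util.Map;

// Sketch only: fail with a descriptive error when a required jaas.conf
// option is absent, instead of letting an empty option set surface later
// as ArrayIndexOutOfBoundsException.
class SaslCredentials {
    static String require(Map<String, ?> options, String key) {
        Object value = (options == null) ? null : options.get(key);
        if (value == null) {
            throw new IllegalArgumentException(
                "jaas.conf entry is missing required option: " + key);
        }
        return value.toString();
    }
}
```

The point of the design is that a missing `username` or `password` becomes an immediate, self-explanatory configuration error rather than a low-level array exception deep inside SASL client setup.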
| ZooKeeper | ZOOKEEPER-2054 | test-patch.sh: don't set ulimit -n |
Bug | Resolved | Major | Fixed | Michi Mutsuzaki | Michi Mutsuzaki | Michi Mutsuzaki | 06/Oct/14 21:52 | 08/Oct/14 06:30 | 08/Oct/14 04:18 | 3.6.0 | 0 | 4 | It seems to be causing NioNettySuiteHammerTest failure. | pre-commit | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 5 years, 24 weeks, 1 day ago | 0|i20v9j: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2053 | Zookeeper scripts should honor ZOOKEEPER_HOME |
Bug | Patch Available | Major | Unresolved | Unassigned | Owen O'Malley | Owen O'Malley | 02/Oct/14 16:39 | 02/Oct/14 18:50 | 0 | 1 | Currently the scripts will determine the root of the Zookeeper installation based on the location of the script. However, it would be convenient if the scripts honored the ZOOKEEPER_HOME environment variable like the other Hadoop-related projects. | 9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 5 years, 25 weeks ago | 0|i20qyv: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2052 | Unable to delete a node when the node has no children |
Bug | Resolved | Major | Fixed | Hongchao Deng | Yip Ng | Yip Ng | 02/Oct/14 03:08 | 28/Oct/14 07:10 | 28/Oct/14 00:41 | 3.4.6, 3.5.0 | 3.4.7, 3.5.1, 3.6.0 | server | 0 | 8 | ZOOKEEPER-1424 | Red Hat Enterprise Linux 6.1 x86_64, standalone or 3 node ensemble (v3.4.6), 2 Java clients (v3.4.6) | We stumbled upon a ZooKeeper bug where a node with no children cannot be removed on our 3 node ZooKeeper ensemble or standalone ZooKeeper on Red Hat Enterprise Linux x86_64 environment. Here is an example scenario/setup: o Standalone ZooKeeper or 3 node ensemble (v3.4.6) o 2 Java clients (v3.4.6) - Client A creates a persistent node (e.g.: /metadata/resources) - Client B creates ephemeral nodes under this persistent node o Client A attempts to remove the /metadata/resources node via multi op delete but fails since there are children o Client B's session expired, all the ephemeral nodes are removed o Client A attempts to recursively remove the /metadata/resources node via multi op; this is expected to succeed but got the following exception: org.apache.zookeeper.KeeperException$NotEmptyException: KeeperErrorCode = Directory not empty (Note that Client B is the only client that creates these ephemeral nodes) o After this, we use zkCli.sh to inspect the problematic node; zkCli.sh shows the /metadata/resources node indeed has no children, but it will not allow the /metadata/resources node to be deleted. 
(shown below) [zk: localhost:2181(CONNECTED) 0] ls / [zookeeper, metadata] [zk: localhost:2181(CONNECTED) 1] ls /metadata [resources] [zk: localhost:2181(CONNECTED) 2] get /metadata/resources null cZxid = 0x3 ctime = Wed Oct 01 22:04:11 PDT 2014 mZxid = 0x3 mtime = Wed Oct 01 22:04:11 PDT 2014 pZxid = 0x9 cversion = 2 dataVersion = 0 aclVersion = 0 ephemeralOwner = 0x0 dataLength = 0 numChildren = 0 [zk: localhost:2181(CONNECTED) 3] delete /metadata/resources Node not empty: /metadata/resources [zk: localhost:2181(CONNECTED) 4] get /metadata/resources null cZxid = 0x3 ctime = Wed Oct 01 22:04:11 PDT 2014 mZxid = 0x3 mtime = Wed Oct 01 22:04:11 PDT 2014 pZxid = 0x9 cversion = 2 dataVersion = 0 aclVersion = 0 ephemeralOwner = 0x0 dataLength = 0 numChildren = 0 o The only ways to remove this node are to either: a) restart the ZooKeeper server, or b) set data on /metadata/resources followed by a subsequent delete. |
9223372036854775807 | No Perforce job exists for this issue. | 11 | 9223372036854775807 | 5 years, 21 weeks, 2 days ago | 0|i20pr3: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2051 | Creating ephemeral znodes from within a transaction fail with local sessions |
Bug | Open | Major | Unresolved | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | 30/Sep/14 16:08 | 14/Dec/19 06:06 | 3.5.0 | 3.7.0 | server | 0 | 5 | With local sessions enabled, the premise is that as soon as you try to create an ephemeral znode your session will be upgraded to global. The problem is that the session upgrade logic doesn't intercept transactions. So creating an ephemeral znode from within a transaction fails with SessionExpired. A small example with Kazoo: {noformat} from kazoo.client import KazooClient k = KazooClient("localhost:2181") k.start() t = k.transaction() t.create("/hello_", "", ephemeral=True) t.commit() [kazoo.exceptions.SessionExpiredError((), {})] {noformat} A workaround, for now, is to create an ephemeral before your transaction which forces your session to be upgraded. Possible solutions could be: * extending zookeeper_init() so that you can request global=True * and/or, providing an upgradeSession() API Thoughts? cc: [~thawan], [~phunt], [~fpj] |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 5 years, 21 weeks, 6 days ago | 0|i20ngf: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2050 | Maven dependency should remove the 3 dependencies |
Bug | Open | Minor | Unresolved | Unassigned | samraj | samraj | 30/Sep/14 09:35 | 30/Sep/14 09:35 | 3.5.0 | 0 | 2 | Just add the latest zookeeper version in the maven dependency | When I add the latest zookeeper jar as a dependency, it throws an error saying the following jars are missing. If I add those as exclusions, it works fine. <exclusions> <exclusion> <groupId>com.sun.jmx</groupId> <artifactId>jmxri</artifactId> </exclusion> <exclusion> <groupId>com.sun.jdmk</groupId> <artifactId>jmxtools</artifactId> </exclusion> <exclusion> <groupId>javax.jms</groupId> <artifactId>jms</artifactId> </exclusion> </exclusions> |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 5 years, 25 weeks, 2 days ago | 0|i20mun: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2049 | Yosemite build failure: htonll conflict |
Bug | Resolved | Major | Fixed | Till Toenshoff | Till Toenshoff | Till Toenshoff | 29/Sep/14 14:08 | 19/Aug/18 13:24 | 16/Oct/14 00:55 | 3.4.5, 3.4.6, 3.5.0 | 3.4.7, 3.5.1, 3.6.0 | 2 | 8 | 0 | 600 | OSX 10.10 (BETA3), Apple LLVM version 6.0 (clang-600.0.51) (based on LLVM 3.5svn) | recordio.h defines {{htonll}} which conflicts with Apple's equally named implementation from within arpa/inet.h. {noformat} gcc -DHAVE_CONFIG_H -I. -I. -I. -I./include -I./tests -I./generated -Wall -Werror -g -O2 -D_GNU_SOURCE -MT zk_log.lo -MD -MP -MF .deps/zk_log.Tpo -c src/zk_log.c -fno-common -DPIC -o zk_log.o In file included from src/recordio.c:19: ./include/recordio.h:76:9: error: expected ')' int64_t htonll(int64_t v); ^ /usr/include/sys/_endian.h:141:25: note: expanded from macro 'htonll' #define htonll(x) __DARWIN_OSSwapInt64(x) ^ /usr/include/libkern/_OSByteOrder.h:78:30: note: expanded from macro '__DARWIN_OSSwapInt64' (__builtin_constant_p(x) ? __DARWIN_OSSwapConstInt64(x) : _OSSwapInt64(x)) ^ ./include/recordio.h:76:9: note: to match this '(' /usr/include/sys/_endian.h:141:25: note: expanded from macro 'htonll' #define htonll(x) __DARWIN_OSSwapInt64(x) ^ /usr/include/libkern/_OSByteOrder.h:78:5: note: expanded from macro '__DARWIN_OSSwapInt64' (__builtin_constant_p(x) ? __DARWIN_OSSwapConstInt64(x) : _OSSwapInt64(x)) ^ In file included from src/recordio.c:19: ./include/recordio.h:76:9: error: conflicting types for '__builtin_constant_p' int64_t htonll(int64_t v); ^ /usr/include/sys/_endian.h:141:25: note: expanded from macro 'htonll' #define htonll(x) __DARWIN_OSSwapInt64(x) ^ /usr/include/libkern/_OSByteOrder.h:78:6: note: expanded from macro '__DARWIN_OSSwapInt64' (__builtin_constant_p(x) ? 
__DARWIN_OSSwapConstInt64(x) : _OSSwapInt64(x)) ^ ./include/recordio.h:76:9: note: '__builtin_constant_p' is a builtin with type 'int ()' /usr/include/sys/_endian.h:141:25: note: expanded from macro 'htonll' #define htonll(x) __DARWIN_OSSwapInt64(x) ^ /usr/include/libkern/_OSByteOrder.h:78:6: note: expanded from macro '__DARWIN_OSSwapInt64' (__builtin_constant_p(x) ? __DARWIN_OSSwapConstInt64(x) : _OSSwapInt64(x)) ^ In file included from generated/zookeeper.jute.c:20: In file included from ./generated/zookeeper.jute.h:21: ./include/recordio.h:76:9: error: expected ')' int64_t htonll(int64_t v); ^ /usr/include/sys/_endian.h:141:25: note: expanded from macro 'htonll' #define htonll(x) __DARWIN_OSSwapInt64(x) ^ /usr/include/libkern/_OSByteOrder.h:78:30: note: expanded from macro '__DARWIN_OSSwapInt64' (__builtin_constant_p(x) ? __DARWIN_OSSwapConstInt64(x) : _OSSwapInt64(x)) ^ ./include/recordio.h:76:9: note: to match this '(' /usr/include/sys/_endian.h:141:25: note: expanded from macro 'htonll' #define htonll(x) __DARWIN_OSSwapInt64(x) ^ /usr/include/libkern/_OSByteOrder.h:78:5: note: expanded from macro '__DARWIN_OSSwapInt64' (__builtin_constant_p(x) ? __DARWIN_OSSwapConstInt64(x) : _OSSwapInt64(x)) ^ In file included from generated/zookeeper.jute.c:20: In file included from ./generated/zookeeper.jute.h:21: ./include/recordio.h:76:9: error: conflicting types for '__builtin_constant_p' int64_t htonll(int64_t v); ^ /usr/include/sys/_endian.h:141:25: note: expanded from macro 'htonll' #define htonll(x) __DARWIN_OSSwapInt64(x) ^ /usr/include/libkern/_OSByteOrder.h:78:6: note: expanded from macro '__DARWIN_OSSwapInt64' (__builtin_constant_p(x) ? 
__DARWIN_OSSwapConstInt64(x) : _OSSwapInt64(x)) ^ ./include/recordio.h:76:9: note: '__builtin_constant_p' is a builtin with type 'int ()' /usr/include/sys/_endian.h:141:25: note: expanded from macro 'htonll' #define htonll(x) __DARWIN_OSSwapInt64(x) ^ /usr/include/libkern/_OSByteOrder.h:78:6: note: expanded from macro '__DARWIN_OSSwapInt64' (__builtin_constant_p(x) ? __DARWIN_OSSwapConstInt64(x) : _OSSwapInt64(x)) ^ In file included from src/zookeeper.c:27: In file included from ./include/zookeeper.h:34: ./include/recordio.h:76:9: error: expected ')' int64_t htonll(int64_t v); ^ /usr/include/sys/_endian.h:141:25: note: expanded from macro 'htonll' #define htonll(x) __DARWIN_OSSwapInt64(x) ^ /usr/include/libkern/_OSByteOrder.h:78:30: note: expanded from macro '__DARWIN_OSSwapInt64' (__builtin_constant_p(x) ? __DARWIN_OSSwapConstInt64(x) : _OSSwapInt64(x)) ^ ./include/recordio.h:76:9: note: to match this '(' /usr/include/sys/_endian.h:141:25: note: expanded from macro 'htonll' #define htonll(x) __DARWIN_OSSwapInt64(x) ^ /usr/include/libkern/_OSByteOrder.h:78:5: note: expanded from macro '__DARWIN_OSSwapInt64' (__builtin_constant_p(x) ? __DARWIN_OSSwapConstInt64(x) : _OSSwapInt64(x)) ^ In file included from src/zookeeper.c:27: In file included from ./include/zookeeper.h:34: ./include/recordio.h:76:9: error: conflicting types for '__builtin_constant_p' int64_t htonll(int64_t v); ^ /usr/include/sys/_endian.h:141:25: note: expanded from macro 'htonll' #define htonll(x) __DARWIN_OSSwapInt64(x) ^ /usr/include/libkern/_OSByteOrder.h:78:6: note: expanded from macro '__DARWIN_OSSwapInt64' (__builtin_constant_p(x) ? 
__DARWIN_OSSwapConstInt64(x) : _OSSwapInt64(x)) ^ ./include/recordio.h:76:9: note: '__builtin_constant_p' is a builtin with type 'int ()' /usr/include/sys/_endian.h:141:25: note: expanded from macro 'htonll' #define htonll(x) __DARWIN_OSSwapInt64(x) ^ /usr/include/libkern/_OSByteOrder.h:78:6: note: expanded from macro '__DARWIN_OSSwapInt64' (__builtin_constant_p(x) ? __DARWIN_OSSwapConstInt64(x) : _OSSwapInt64(x)) ^ In file included from src/zk_hashtable.c:19: In file included from src/zk_hashtable.h:22: In file included from ./include/zookeeper.h:34: ./include/recordio.h:76:9: error: expected ')' int64_t htonll(int64_t v); ^ /usr/include/sys/_endian.h:141:25: note: expanded from macro 'htonll' #define htonll(x) __DARWIN_OSSwapInt64(x) ^ /usr/include/libkern/_OSByteOrder.h:78:30: note: expanded from macro '__DARWIN_OSSwapInt64' (__builtin_constant_p(x) ? __DARWIN_OSSwapConstInt64(x) : _OSSwapInt64(x)) ^ ./include/recordio.h:76:9: note: to match this '(' /usr/include/sys/_endian.h:141:25: note: expanded from macro 'htonll' #define htonll(x) __DARWIN_OSSwapInt64(x) ^ /usr/include/libkern/_OSByteOrder.h:78:5: note: expanded from macro '__DARWIN_OSSwapInt64' (__builtin_constant_p(x) ? __DARWIN_OSSwapConstInt64(x) : _OSSwapInt64(x)) ^ In file included from src/zk_hashtable.c:19: In file included from src/zk_hashtable.h:22: In file included from ./include/zookeeper.h:34: ./include/recordio.h:76:9: error: conflicting types for '__builtin_constant_p' int64_t htonll(int64_t v); ^ /usr/include/sys/_endian.h:141:25: note: expanded from macro 'htonll' #define htonll(x) __DARWIN_OSSwapInt64(x) ^ /usr/include/libkern/_OSByteOrder.h:78:6: note: expanded from macro '__DARWIN_OSSwapInt64' (__builtin_constant_p(x) ? 
__DARWIN_OSSwapConstInt64(x) : _OSSwapInt64(x)) ^ ./include/recordio.h:76:9: note: '__builtin_constant_p' is a builtin with type 'int ()' /usr/include/sys/_endian.h:141:25: note: expanded from macro 'htonll' #define htonll(x) __DARWIN_OSSwapInt64(x) ^ /usr/include/libkern/_OSByteOrder.h:78:6: note: expanded from macro '__DARWIN_OSSwapInt64' (__builtin_constant_p(x) ? __DARWIN_OSSwapConstInt64(x) : _OSSwapInt64(x)) ^ src/recordio.c:83:9: error: expected ')' int64_t htonll(int64_t v) ^ /usr/include/sys/_endian.h:141:25: note: expanded from macro 'htonll' #define htonll(x) __DARWIN_OSSwapInt64(x) ^ /usr/include/libkern/_OSByteOrder.h:78:30: note: expanded from macro '__DARWIN_OSSwapInt64' (__builtin_constant_p(x) ? __DARWIN_OSSwapConstInt64(x) : _OSSwapInt64(x)) ^ src/recordio.c:83:9: note: to match this '(' /usr/include/sys/_endian.h:141:25: note: expanded from macro 'htonll' #define htonll(x) __DARWIN_OSSwapInt64(x) ^ /usr/include/libkern/_OSByteOrder.h:78:5: note: expanded from macro '__DARWIN_OSSwapInt64' (__builtin_constant_p(x) ? __DARWIN_OSSwapConstInt64(x) : _OSSwapInt64(x)) ^ src/recordio.c:83:9: error: conflicting types for '__builtin_constant_p' int64_t htonll(int64_t v) ^ /usr/include/sys/_endian.h:141:25: note: expanded from macro 'htonll' #define htonll(x) __DARWIN_OSSwapInt64(x) ^ /usr/include/libkern/_OSByteOrder.h:78:6: note: expanded from macro '__DARWIN_OSSwapInt64' (__builtin_constant_p(x) ? __DARWIN_OSSwapConstInt64(x) : _OSSwapInt64(x)) ^ ./include/recordio.h:76:9: note: '__builtin_constant_p' is a builtin with type 'int ()' int64_t htonll(int64_t v); ^ /usr/include/sys/_endian.h:141:25: note: expanded from macro 'htonll' #define htonll(x) __DARWIN_OSSwapInt64(x) ^ /usr/include/libkern/_OSByteOrder.h:78:6: note: expanded from macro '__DARWIN_OSSwapInt64' (__builtin_constant_p(x) ? 
__DARWIN_OSSwapConstInt64(x) : _OSSwapInt64(x)) ^ src/recordio.c:83:9: error: definition of builtin function '__builtin_constant_p' int64_t htonll(int64_t v) ^ /usr/include/sys/_endian.h:141:25: note: expanded from macro 'htonll' #define htonll(x) __DARWIN_OSSwapInt64(x) ^ /usr/include/libkern/_OSByteOrder.h:78:6: note: expanded from macro '__DARWIN_OSSwapInt64' (__builtin_constant_p(x) ? __DARWIN_OSSwapConstInt64(x) : _OSSwapInt64(x)) ^ In file included from src/zk_log.c:23: In file included from ./include/zookeeper_log.h:22: In file included from ./include/zookeeper.h:34: ./include/recordio.h:76:9: error: expected ')' int64_t htonll(int64_t v); ^ /usr/include/sys/_endian.h:141:25: note: expanded from macro 'htonll' #define htonll(x) __DARWIN_OSSwapInt64(x) ^ /usr/include/libkern/_OSByteOrder.h:78:30: note: expanded from macro '__DARWIN_OSSwapInt64' (__builtin_constant_p(x) ? __DARWIN_OSSwapConstInt64(x) : _OSSwapInt64(x)) ^ ./include/recordio.h:76:9: note: to match this '(' /usr/include/sys/_endian.h:141:25: note: expanded from macro 'htonll' #define htonll(x) __DARWIN_OSSwapInt64(x) ^ /usr/include/libkern/_OSByteOrder.h:78:5: note: expanded from macro '__DARWIN_OSSwapInt64' (__builtin_constant_p(x) ? __DARWIN_OSSwapConstInt64(x) : _OSSwapInt64(x)) ^ In file included from src/zk_log.c:23: In file included from ./include/zookeeper_log.h:22: In file included from ./include/zookeeper.h:34: ./include/recordio.h:76:9: error: conflicting types for '__builtin_constant_p' int64_t htonll(int64_t v); ^ /usr/include/sys/_endian.h:141:25: note: expanded from macro 'htonll' #define htonll(x) __DARWIN_OSSwapInt64(x) ^ /usr/include/libkern/_OSByteOrder.h:78:6: note: expanded from macro '__DARWIN_OSSwapInt64' (__builtin_constant_p(x) ? 
__DARWIN_OSSwapConstInt64(x) : _OSSwapInt64(x)) ^ ./include/recordio.h:76:9: note: '__builtin_constant_p' is a builtin with type 'int ()' /usr/include/sys/_endian.h:141:25: note: expanded from macro 'htonll' #define htonll(x) __DARWIN_OSSwapInt64(x) ^ /usr/include/libkern/_OSByteOrder.h:78:6: note: expanded from macro '__DARWIN_OSSwapInt64' (__builtin_constant_p(x) ? __DARWIN_OSSwapConstInt64(x) : _OSSwapInt64(x)) ^ 2 errors generated. 5 errors generated. 2 errors generated. make[5]: *** [recordio.lo] Error 1 make[5]: *** Waiting for unfinished jobs.... 2 errors generated. make[5]: *** [zookeeper.jute.lo] Error 1 make[5]: *** [zk_hashtable.lo] Error 1 make[5]: *** [zk_log.lo] Error 1 2 errors generated. make[5]: *** [zookeeper.lo] Error 1 make[4]: *** [all] Error 2 make[3]: *** [zookeeper-3.4.5/src/c/libzookeeper_mt.la] Error 2 make[3]: *** Waiting for unfinished jobs.... ln -fs libleveldb.dylib.1.4 libleveldb.dylib ln -fs libleveldb.dylib.1.4 libleveldb.dylib.1 make[2]: *** [all-recursive] Error 1 make[1]: *** [all] Error 2 make: *** [all-recursive] Error 1 {noformat} |
100% | 100% | 600 | 0 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 3 | 9223372036854775807 | 3 years, 22 weeks, 3 days ago | 0|i20ljb: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2048 | Ability to support alphabetic characters in the version string |
Improvement | Open | Major | Unresolved | Unassigned | Chunjun Xiao | Chunjun Xiao | 26/Sep/14 03:10 | 26/Sep/14 03:11 | 3.4.5 | build | 0 | 1 | Following ZOOKEEPER-1598, could we further enhance ZK to support alphabetic characters in the version string? E.g., zookeeper-3.4.5.ABC.1.2.3.4-1.jar. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 5 years, 25 weeks, 6 days ago | 0|i20igf: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
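The relaxed version-string rule requested in ZOOKEEPER-2048 can be sketched as a single pattern check. This is an illustration of the requested behaviour, not the project's actual build-script validation:

```java
import java.util.regex.Pattern;

// Sketch only: a version pattern that, unlike a digits-only check, also
// accepts alphabetic components such as "3.4.5.ABC.1.2.3.4-1".
class VersionCheck {
    private static final Pattern RELAXED =
        Pattern.compile("[0-9]+(\\.[0-9A-Za-z]+)*(-[0-9A-Za-z]+)?");

    static boolean isValid(String v) {
        return v != null && RELAXED.matcher(v).matches();
    }
}
```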
| ZooKeeper | ZOOKEEPER-2047 | ZOOKEEPER-1833 testTruncationNullLog fails on windows |
Sub-task | Resolved | Major | Fixed | Flavio Paiva Junqueira | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 25/Sep/14 13:35 | 27/Sep/14 07:16 | 27/Sep/14 02:32 | 3.4.6 | 3.4.7, 3.5.1, 3.6.0 | tests | 0 | 3 | The calls to delete the log file on windows are failing, so the test ends up failing. The fix is to close the db before deleting. | 9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 5 years, 25 weeks, 5 days ago | 0|i20hk7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2046 | Compile zookeeper by JDK 7 in default |
Bug | Resolved | Major | Duplicate | Unassigned | Guo Ruijing | Guo Ruijing | 23/Sep/14 02:43 | 02/Dec/14 16:22 | 02/Dec/14 16:22 | 3.5.0 | build | 0 | 7 | ZOOKEEPER-1963 | Currently, zookeeper is compiled for JDK 5 by default, as <property name="javac.target" value="1.5" /> <property name="javac.source" value="1.5" /> We may change it to JDK 7 by default. |
build | 9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 5 years, 16 weeks, 2 days ago | 0|i20dqn: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2045 | ConnectStringParser public api didn't handle null connect string |
Bug | Patch Available | Minor | Unresolved | Hongchao Deng | Hongchao Deng | Hongchao Deng | 22/Sep/14 19:17 | 22/Sep/14 20:22 | 3.5.0 | 0 | 1 | {code} public final class ConnectStringParser { ... public ConnectStringParser(String connectString) { ... {code} ConnectStringParser is a public API. Besides that, both the ZooKeeper constructor and ZooKeeper#updateServerList use it. However, it doesn't handle a null connectString, and seeing an unexplained NPE isn't helpful, so I added a check to the constructor. |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 5 years, 26 weeks, 3 days ago | 0|i20d87: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
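The guard proposed in ZOOKEEPER-2045 can be sketched on its own. This mirrors the patch's intent (fail fast with a descriptive error); it is not the actual ConnectStringParser source:

```java
// Sketch only: validate the connect string up front so callers get a
// descriptive IllegalArgumentException rather than a NullPointerException
// from deep inside host:port parsing.
class ConnectStringGuard {
    static String checkConnectString(String connectString) {
        if (connectString == null || connectString.trim().isEmpty()) {
            throw new IllegalArgumentException(
                "connectString must be a non-empty host:port list");
        }
        return connectString.trim();
    }
}
```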
| ZooKeeper | ZOOKEEPER-2044 | CancelledKeyException in zookeeper branch-3.4 |
Bug | Closed | Minor | Fixed | Michael Han | shamjith antholi | shamjith antholi | 22/Sep/14 06:17 | 19/Sep/19 02:36 | 31/Jan/17 11:40 | 3.4.6 | 3.4.10 | server | 3 | 21 | 0 | 600 | ZOOKEEPER-1237, ZOOKEEPER-2677 | Red Hat Enterprise Linux Server release 6.2 | I am getting cancelled key exception in zookeeper (version 3.4.5). Please see the log below. When this error is thrown, the connected solr shard is going down by giving the error "Failed to index metadata in Solr,StackTrace=SolrError: HTTP status 503.Reason: {"responseHeader":{"status":503,"QTime":204},"error":{"msg":"ClusterState says we are the leader, but locally we don't think so","code":503" and ultimately the current activity is going down. Could you please give a solution for this ? Zookeper log ---------------------------------------------------------- 2014-09-16 02:58:47,799 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@832] - Client attempting to renew session 0x24868e7ca980003 at /172.22.0.5:58587 2014-09-16 02:58:47,800 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:Learner@107] - Revalidating client: 0x24868e7ca980003 2014-09-16 02:58:47,802 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:ZooKeeperServer@588] - Invalid session 0x24868e7ca980003 for client /172.22.0.5:58587, probably expired 2014-09-16 02:58:47,803 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1001] - Closed socket connection for client /172.22.0.5:58587 which had sessionid 0x24868e7ca980003 2014-09-16 02:58:47,810 [myid:1] - ERROR [CommitProcessor:1:NIOServerCnxn@180] - Unexpected Exception: java.nio.channels.CancelledKeyException at sun.nio.ch.SelectionKeyImpl.ensureValid(SelectionKeyImpl.java:55) at sun.nio.ch.SelectionKeyImpl.interestOps(SelectionKeyImpl.java:59) at org.apache.zookeeper.server.NIOServerCnxn.sendBuffer(NIOServerCnxn.java:153) at org.apache.zookeeper.server.NIOServerCnxn.sendResponse(NIOServerCnxn.java:1076) at 
org.apache.zookeeper.server.NIOServerCnxn.process(NIOServerCnxn.java:1113) at org.apache.zookeeper.server.DataTree.setWatches(DataTree.java:1327) at org.apache.zookeeper.server.ZKDatabase.setWatches(ZKDatabase.java:384) at org.apache.zookeeper.server.FinalRequestProcessor.processRequest(FinalRequestProcessor.java:304) at org.apache.zookeeper.server.quorum.CommitProcessor.run(CommitProcessor.java:74) |
100% | 100% | 600 | 0 | pull-request-available | 9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 3 years, 6 weeks, 4 days ago | 0|i20brr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2043 | Too many connections,maxClientCnxns don't close |
Bug | Open | Major | Unresolved | Unassigned | hikin | hikin | 22/Sep/14 05:15 | 17/Sep/19 08:54 | 0 | 3 | org.apache.zookeeper.server.NIOServerCnxn void doIO(SelectionKey k) throws InterruptedException { try { if (isSocketOpen() == false) { LOG.warn("trying to do i/o on a null socket for session:0x" + Long.toHexString(sessionId)); return; } public void close() { if (!factory.removeCnxn(this)) { return; } If the socket is suddenly broken, the connection is not cleaned up properly: this early return in close() leaks connections, eventually exceeding the maxClientCnxns limit so that new client connections cannot be established. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 26 weeks, 2 days ago | 0|i20bo7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
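The leak described in ZOOKEEPER-2043 comes from `close()` returning before the socket is released whenever the factory no longer tracks the connection. A simplified sketch of the safer ordering (this is an illustration, not the NIOServerCnxn source):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch only: release the socket unconditionally, and use the factory's
// removal result only to skip duplicate bookkeeping, never to skip cleanup.
class Cnxn {
    static final Set<Cnxn> factory = ConcurrentHashMap.newKeySet();
    final AtomicBoolean socketOpen = new AtomicBoolean(true);

    void close() {
        boolean tracked = factory.remove(this); // false if already untracked
        socketOpen.set(false);                  // close the socket regardless
        if (!tracked) {
            return; // only the per-IP count bookkeeping is skipped here
        }
        // ...decrement per-IP connection counts, cancel selection keys...
    }
}
```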
| ZooKeeper | ZOOKEEPER-2042 | zkServer.sh does not work properly on Solaris |
Bug | Resolved | Minor | Duplicate | Chris Nauroth | John Lindwall | John Lindwall | 17/Sep/14 13:44 | 01/May/15 01:56 | 01/May/15 01:56 | 3.4.6 | scripts | 1 | 3 | ZOOKEEPER-1927 | Solaris 5.11 | There are two issues in the zkServer.sh script that make it not work properly out of the box on Solaris. 1. The bin/zkServer.sh script uses plain "echo" in all instances but one: when writing the pid to the pid file. In that instance it uses "/bin/echo". The "/bin/echo" command on Solaris does not understand the "-n" parameter and interprets it as a literal string, so the "-n" gets written into the pid file along with the pid. This causes the "stop" command to fail. 2. The /bin/grep command in Solaris does not understand special character classes like "[[:space:]]". You must use the alternate posix version of grep as found in /usr/xpg4/bin/grep for this to work. If the script cannot be made completely generic then at least we should document the need to use the posix grep implementation on Solaris. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 46 weeks, 6 days ago | 0|i20647: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2041 | maxClientCnxns should limit connections for each process rather than ip |
Improvement | Open | Major | Unresolved | Unassigned | chendihao | chendihao | 16/Sep/14 22:10 | 16/Sep/14 22:10 | 3.4.4 | server | 1 | 1 | Recently I learned more about the maxClientCnxns configuration and read the code of the implementation. I now know it limits the number of connections from the same IP. But we may actually run multiple processes on the same server, and if one process exceeds the maxClientCnxns limit, all the ZooKeeper clients on that host will fail to connect to the ZooKeeper cluster. Can we fix that to make this limitation per process? Any suggestion is welcome. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 5 years, 27 weeks, 1 day ago | 0|i204zb: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2040 | Server to log underlying cause of SASL connection problems |
Improvement | Closed | Major | Fixed | Steve Loughran | Steve Loughran | Steve Loughran | 16/Sep/14 12:05 | 21/Jul/16 16:18 | 11/Sep/15 02:06 | 3.4.6 | 3.4.7, 3.5.2, 3.6.0 | server | 0 | 7 | When you have SASL connectivity problems, you spend time staring at logs —ideally logs with stack traces. ZK server can help here by including the stack traces when there is a SASL auth problem, rather than just giving the text of the exception. |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 4 years, 27 weeks, 6 days ago |
Reviewed
|
0|i203vz: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2039 | Jute compareBytes incorrect comparison index |
Bug | Resolved | Minor | Fixed | Ian Dimayuga | Ian Dimayuga | Ian Dimayuga | 15/Sep/14 22:34 | 28/Sep/14 05:48 | 18/Sep/14 11:51 | 3.4.6, 3.5.0 | 3.4.7, 3.5.1 | jute | 0 | 5 | 86400 | 86400 | 0% | The Jute utility's naïve byte-array comparison compares b1[off1+i] with b2[off2+1]. (A literal 1, not the variable i) It should be off2+i, in parallel with the other operand. |
0% | 0% | 86400 | 86400 | 9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 5 years, 25 weeks, 4 days ago | 0|i202qn: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
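The one-character fix in ZOOKEEPER-2039 is easiest to see in a self-contained sketch: both arrays must be indexed with `i`, where the original compared `b1[off1+i]` against `b2[off2+1]`. This illustrates the corrected logic; it is not the actual org.apache.jute source:

```java
// Sketch of the corrected byte-range comparison: index b2 with off2 + i
// (the bug used a literal 1 in place of i). Shorter prefix sorts first
// when the common prefix is equal.
class JuteCompare {
    static int compareBytes(byte[] b1, int off1, int len1,
                            byte[] b2, int off2, int len2) {
        for (int i = 0; i < Math.min(len1, len2); i++) {
            if (b1[off1 + i] != b2[off2 + i]) {
                return (b1[off1 + i] & 0xff) < (b2[off2 + i] & 0xff) ? -1 : 1;
            }
        }
        return Integer.compare(len1, len2);
    }
}
```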
| ZooKeeper | ZOOKEEPER-2038 | Coding error in recipes/lock/src/c/src/zoo_lock.c |
Bug | Open | Minor | Unresolved | Unassigned | YuQing | YuQing | 15/Sep/14 22:23 | 15/Sep/14 22:23 | 3.4.6 | recipes | 0 | 1 | arch linux 64bit | In the function child_floor(), strcmp() is used to compare the whole string. But there are conditions where sorted_data looks like ("x-000-00", "x-222-01", "x-111-02"); when "x-222-01" calls child_floor() to get a predecessor for watching, the current logic returns "x-111-02" instead of the correct "x-000-00". Using strcmp() == 0 with a break statement should solve this problem. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 5 years, 27 weeks, 2 days ago | 0|i202pz: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
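The intended behaviour in ZOOKEEPER-2038 is: with the children already in sequence order, stop scanning at the node equal to our own, and watch the entry seen just before it. A pure-Java sketch of that logic (the actual recipe is C, in zoo_lock.c; this only illustrates the fix):

```java
import java.util.List;

// Sketch only: the predecessor of 'own' in sequence-sorted children is the
// last entry scanned before an exact match, which is why the fix is a
// strcmp() == 0 test followed by a break rather than whole-string ordering.
class ChildFloor {
    /** Returns the element preceding 'own' in the given order, or null. */
    static String predecessor(List<String> sortedChildren, String own) {
        String prev = null;
        for (String child : sortedChildren) {
            if (child.equals(own)) { // strcmp() == 0: stop here
                break;
            }
            prev = child;
        }
        return prev;
    }
}
```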
| ZooKeeper | ZOOKEEPER-2037 | ZOOKEEPER-2016 ZooKeeper methods to wait on client connection (re)establishment. |
Sub-task | Resolved | Major | Won't Fix | Hongchao Deng | Hongchao Deng | Hongchao Deng | 15/Sep/14 17:00 | 27/Jan/15 23:32 | 27/Jan/15 23:32 | 3.5.0 | java client | 0 | 3 | When a ZooKeeper object is created and returned, it is not guaranteed that a connection to a server has been established. Usually, a wait/signal pattern is used in the ZK watcher {code} latch = new CountDownLatch(1) zk = new ZooKeeper(..., new Watcher() { override void process(WatchedEvent event) { if (event.type = SyncConnected) { latch.countDown() } } },...) latch.await(); // connection has been established. do something with zk. {code} There are two disadvantages: 1. The latch object isn't garbage-collected, because the watcher keeps monitoring all kinds of events. 2. With the introduction of dynamic reconfig, clients move to other servers as needed, and this latch method doesn't work so well. Here I propose adding (both sync and async) wait methods to act as a latch for connection establishment, so that it becomes much easier to manage and work with: {code} zk = new ZooKeeper(...) zk.waitUntilConnected() {code} |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 5 years, 8 weeks, 1 day ago | 0|i202dr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
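The waitUntilConnected() behaviour proposed in ZOOKEEPER-2037 reduces to a one-shot latch tripped by the first connected event. A pure-Java sketch (the event type here is a stand-in for illustration, not the real ZooKeeper watcher API):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Sketch only: a one-shot gate a connection listener trips on the first
// connected event, so callers can block until the session is usable
// without wiring the latch into a long-lived watcher.
class ConnectionGate {
    enum Event { CONNECTED, DISCONNECTED }

    private final CountDownLatch connected = new CountDownLatch(1);

    void onEvent(Event e) {
        if (e == Event.CONNECTED) {
            connected.countDown(); // latch is one-shot; later events are no-ops
        }
    }

    boolean waitUntilConnected(long timeout, TimeUnit unit)
            throws InterruptedException {
        return connected.await(timeout, unit);
    }
}
```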
| ZooKeeper | ZOOKEEPER-2036 | Client which is not authorized able to access the Secure Data which is created by the Secure Client |
Bug | Resolved | Blocker | Not A Problem | Unassigned | Brahma Reddy Battula | Brahma Reddy Battula | 15/Sep/14 04:08 | 28/Oct/14 09:24 | 28/Oct/14 09:24 | 3.4.5 | server | 0 | 2 | *{color:blue}Scenario:{color}* Started the secure ZK cluster. Logged in with the secure ZK client (by passing a valid jaas.conf) and created the znodes. Then logged in to the same cluster with an unsecured ZK client (without jaas.conf) and was able to access the data created by the secured client. *{color:blue}Secured Client{color} (which created the znodes):* 2014-09-15 13:40:56,288 [myid:] - INFO [main-SendThread(localhost:2181):ZooKeeperSaslClient$1@285] - Client will use GSSAPI as SASL mechanism. 2014-09-15 13:40:56,296 [myid:] - INFO [Thread-1:Login@301] - TGT valid starting at: Mon Sep 15 13:40:56 IST 2014 2014-09-15 13:40:56,296 [myid:] - INFO [Thread-1:Login@302] - TGT expires: Tue Sep 16 13:40:56 IST 2014 2014-09-15 13:40:56,296 [myid:] - INFO [Thread-1:Login$1@181] - TGT refresh sleeping until: Tue Sep 16 09:36:04 IST 2014 2014-09-15 13:40:56,302 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1000] - Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. 
Will attempt to SASL-authenticate using Login Context section 'Client' 2014-09-15 13:40:56,308 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@855] - Socket connection established to localhost/0:0:0:0:0:0:0:1:2181, initiating session 2014-09-15 13:40:56,344 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1260] - Session establishment complete on server localhost/0:0:0:0:0:0:0:1:2181, sessionid = 0x1486856657e0016, negotiated timeout = 30000 WATCHER:: WatchedEvent state:SyncConnected type:None path:null WATCHER:: WatchedEvent state: *{color:red}SaslAuthenticated{color}* type:None path:null [zk: localhost:2181(CONNECTED) 1] create -s /tmp-seq 'sd:er:' Created /tmp-seq0000000003 [zk: localhost:2181(CONNECTED) 2] create -s /tmp-seq 'sd:er:' Created /tmp-seq0000000004 [zk: localhost:2181(CONNECTED) 0] ls / [tmp-seq0000000004, tmp-seq0000000003, hadoop, hadoop-ha, tmp-seq0000000002, zookeeper] *{color:blue}UnSecured Client{color}:(which is Accesing Znodes)* Welcome to ZooKeeper! 2014-09-15 13:00:30,440 [myid:] - WARN [main-SendThread(localhost:2181):ClientCnxn$SendThread@982] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/home/****/zookeeper/conf/jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. 
014-09-15 13:00:30,441 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1000] - Opening socket connection to server localhost/127.0.0.1:2181 WatchedEvent state: *{color:red}AuthFailed{color}* type:None path:null JLine support is enabled 2014-09-15 13:00:30,451 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@855] - Socket connection established to localhost/127.0.0.1:2181, initiating session [zk: localhost:2181(CONNECTING) 0] 2014-09-15 13:00:30,488 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1260] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x348685662250005, negotiated timeout = 30000 WATCHER:: WatchedEvent state:SyncConnected type:None path:null [zk: localhost:2181(CONNECTED) 0] ls / [tmp-seq0000000004, tmp-seq0000000003, hadoop, hadoop-ha, tmp-seq0000000002, zookeeper] [zk: localhost:2181(CONNECTED) 1] get /tmp-seq000000000 tmp-seq0000000004 tmp-seq0000000003 tmp-seq0000000002 [zk: localhost:2181(CONNECTED) 1] get /tmp-seq0000000002 '' cZxid = 0x100000040 ctime = Mon Sep 15 12:51:50 IST 2014 mZxid = 0x100000040 mtime = Mon Sep 15 12:51:50 IST 2014 pZxid = 0x100000040 cversion = 0 dataVersion = 0 aclVersion = 0 ephemeralOwner = 0x0 dataLength = 2 numChildren = 0 |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 5 years, 21 weeks, 2 days ago | 0|i201br: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2035 | diagnostics on SASL connection problems doesn't match error strings sent back |
Bug | Open | Minor | Unresolved | Unassigned | Steve Loughran | Steve Loughran | 13/Sep/14 13:33 | 15/Dec/15 12:47 | 3.4.6 | 0 | 3 | ZOOKEEPER-2344 | Java 1.7.0.67 on OS/X | The diagnostics code in {{ZooKeeperSaslClient.createSaslToken()}} which looks for a {{"("Mechanism level: Server not found in Kerberos database (7) - UNKNOWN_SERVER)"}} error string isn't finding a match ... the text now appears to be {{(Mechanism level: Server not found in Kerberos database (7) - Server not found in Kerberos database)}} | 9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 5 years, 27 weeks, 5 days ago | 0|i200dz: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2034 | StringIndexOutOfBoundsException in createSaslServer |
Bug | Resolved | Minor | Not A Problem | Unassigned | Steve Loughran | Steve Loughran | 13/Sep/14 13:09 | 13/Sep/14 13:11 | 13/Sep/14 13:11 | 3.4.6 | 0 | 1 | I'm seeing {{StringIndexOutOfBoundsException}} in {{createSaslServer}}, where my test kerberos code is (presumably) not correctly set up. Looking at the comments, the hint is that the problem is that my principals are called {{zookeeper@EXAMPLE.COM}}, which doesn't match the pattern {{principal/host@realm}} |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 5 years, 27 weeks, 5 days ago | 0|i200dj: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2033 | zookeeper follower fails to start after a restart immediately following a new epoch |
Bug | Resolved | Major | Fixed | Asad Saeed | Asad Saeed | Asad Saeed | 10/Sep/14 14:14 | 03/Sep/15 20:06 | 03/Sep/15 19:00 | 3.4.6 | 3.4.7 | quorum | 0 | 5 | The following issue was seen when adding a new node to a zookeeper cluster. Reproduction steps: 1. Create a 2-node ensemble. Write some keys. 2. Add another node to the ensemble by modifying the config, restarting the entire cluster. 3. Restart the new node before writing any new keys. What occurs is that the new node gets a SNAP from the newly elected leader, since it is too far behind. The zxid for this snapshot is from the new epoch, but that is not in the committed log cache. On restart of this new node, the follower sends the new epoch zxid. The leader looks at its maxCommitted logs, sees that it is not the newest epoch, and therefore sends a TRUNC. The follower sees the TRUNC but it only has a snapshot, so it cannot truncate! |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 4 years, 29 weeks ago | 0|i1zw5j: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2032 | ReconfigBackupTest didn't clean up resources. |
Test | Resolved | Minor | Fixed | Hongchao Deng | Hongchao Deng | Hongchao Deng | 09/Sep/14 19:33 | 17/Sep/14 10:11 | 10/Sep/14 03:03 | 3.5.0 | 3.5.1 | tests | 0 | 4 | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 5 years, 27 weeks, 1 day ago | 0|i1zuun: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2031 | Support tagging a QuorumServer |
Improvement | Patch Available | Major | Unresolved | some one | some one | some one | 09/Sep/14 16:43 | 05/Feb/20 07:11 | 3.7.0, 3.5.8 | server | 0 | 5 | Currently ZooKeeper only allows identifying servers by the server id, which is an integer. For my (unavoidable) use case, there may be concurrent dynamic removes and adds of servers, which may eventually lead to id collisions. When this occurs, there is no good way to determine whether the server we want to remove (given an id collision) is the right one. To support my use case, I propose that we add a tag field to the server string. For my specific use case, this tag field will be used to store a uuid as a string. So for example: server.1=127.0.0.1:1234:1236:participant;0.0.0.0:1237;743b9d23-85cb-45b1-8949-930fdabb21f0 |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 1 year, 45 weeks ago | 0|i1zuo7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
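A parser for the extended server string proposed in ZOOKEEPER-2031 above might look like the following sketch. The trailing semicolon-separated tag field and the returned dictionary layout are assumptions based solely on the reporter's example, not an implemented ZooKeeper format:

```python
def parse_server_line(line):
    """Parse a hypothetical 'server.N=host:qPort:ePort[:role];clientAddr;tag' line."""
    key, value = line.split("=", 1)
    sid = int(key.split(".", 1)[1])          # the integer id after 'server.'
    parts = value.split(";")
    hostspec = parts[0]                      # host:quorumPort:electionPort[:role]
    client_addr = parts[1] if len(parts) > 1 else None
    tag = parts[2] if len(parts) > 2 else None   # the proposed uuid tag, if present
    host, quorum_port, election_port, *rest = hostspec.split(":")
    role = rest[0] if rest else "participant"
    return {"id": sid, "host": host, "role": role,
            "client_addr": client_addr, "tag": tag}

line = "server.1=127.0.0.1:1234:1236:participant;0.0.0.0:1237;743b9d23-85cb-45b1-8949-930fdabb21f0"
print(parse_server_line(line)["tag"])   # 743b9d23-85cb-45b1-8949-930fdabb21f0
```

Because the tag is an opaque trailing field, existing strings without a tag would still parse, which is presumably what a backward-compatible version of the proposal would require.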
| ZooKeeper | ZOOKEEPER-2030 | dynamicConfigFile should have an absolute path, not a relative path, to the dynamic configuration file |
Bug | Resolved | Minor | Fixed | Alexander Shraer | Alexander Shraer | Alexander Shraer | 07/Sep/14 00:42 | 17/Sep/14 07:11 | 17/Sep/14 01:36 | 3.5.0 | 3.5.1, 3.6.0 | server | 0 | 5 | a relative path doesn't seem like a good idea since it will work only if we start the server from the same directory as we did previously. | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 5 years, 27 weeks, 1 day ago | 0|i1zrpz: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2029 | Leader.LearnerCnxAcceptor should handle exceptions in run() |
Bug | Resolved | Minor | Fixed | Rakesh Radhakrishnan | Asad Saeed | Asad Saeed | 05/Sep/14 19:21 | 08/May/15 14:55 | 11/Apr/15 14:49 | 3.4.6 | 3.5.1, 3.6.0 | quorum | 0 | 5 | ZOOKEEPER-602, ZOOKEEPER-1907 | Leader.LearnerCnxAcceptor swallows exceptions and shuts itself down. It should instead crash the Leader. | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 4 years, 49 weeks, 5 days ago | 0|i1zqzr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2028 | TestClient#testAuth aborts because ASSERT throws exception again in destructor when there is active exception already |
Bug | Patch Available | Minor | Unresolved | Qiang Tian | Qiang Tian | Qiang Tian | 05/Sep/14 03:20 | 19/Jul/17 16:25 | 3.4.6 | tests | 0 | 3 | linux | Hi Guys, the testcase consistently fails if debug is turned on(set zoo_set_debug_level(ZOO_LOG_LEVEL_DEBUG) in TestDriver.cc); if debug is OFF, it fails for the first time, subsequent runs succeed. can someone help take a look? thanks! below is related info: 1. screen output {quote} [exec] Zookeeper_simpleSystem::testPing : elapsed 17200 : OK [exec] Zookeeper_simpleSystem::testAcl : elapsed 1014 : OK [exec] Zookeeper_simpleSystem::testChroot : elapsed 3041 : OK [exec] terminate called after throwing an instance of 'CppUnit::Exception' [exec] what(): equality assertion failed [exec] - Expected: 0 [exec] - Actual : -116 [exec] [exec] make: *** [run-check] Aborted (core dumped) [exec] Zookeeper_simpleSystem::testAuth {quote} 2. last lines in zk server log: {quote} 2014-09-04 21:13:57,711 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:22181:ZooKeeperServer@868] - Client attempting to establish new session at /127.0.0.1:34992 2014-09-04 21:13:57,714 [myid:] - INFO [SyncThread:0:ZooKeeperServer@617] - Established session 0x14844044d96000a with negotiated timeout 10000 for client /127.0.0.1:34992 2014-09-04 21:14:01,039 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:22181:ZooKeeperServer@892] - got auth packet /127.0.0.1:34992 2014-09-04 21:14:01,747 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:22181:ZooKeeperServer@926] - auth success /127.0.0.1:34992 2014-09-04 21:14:01,912 [myid:] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:22181:NIOServerCnxn@362] - Exception causing close of session 0x14844044d96000a due to java.io.IOException: Connection reset by peer 2014-09-04 21:14:01,914 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:22181:NIOServerCnxn@1007] - Closed socket connection for client /127.0.0.1:34992 which had sessionid 0x14844044d96000a 2014-09-04 21:14:12,000 [myid:] - INFO 
[SessionTracker:ZooKeeperServer@347] - Expiring session 0x14844044d96000a, timeout of 10000ms exceeded 2014-09-04 21:14:12,001 [myid:] - INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@494] - Processed session termination for sessionid: 0x14844044d96000a {quote} 3. last lines in TEST-Zookeeper_simpleSystem-mt.txt: {quote} 2014-09-04 21:13:57,703:383481(0x7f8866c4b720):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.6 2014-09-04 21:13:57,703:383481(0x7f8866c4b720):ZOO_INFO@log_env@716: Client environment:host.name=localhost 2014-09-04 21:13:57,703:383481(0x7f8866c4b720):ZOO_INFO@log_env@723: Client environment:os.name=Linux 2014-09-04 21:13:57,703:383481(0x7f8866c4b720):ZOO_INFO@log_env@724: Client environment:os.arch=2.6.32-358.el6.x86_64 2014-09-04 21:13:57,703:383481(0x7f8866c4b720):ZOO_INFO@log_env@725: Client environment:os.version=#1 SMP Tue Jan 29 11:47:41 EST 2013 2014-09-04 21:13:57,703:383481(0x7f8866c4b720):ZOO_INFO@log_env@733: Client environment:user.name=tianq 2014-09-04 21:13:57,703:383481(0x7f8866c4b720):ZOO_INFO@log_env@741: Client environment:user.home=/home/tianq 2014-09-04 21:13:57,703:383481(0x7f8866c4b720):ZOO_INFO@log_env@753: Client environment:user.dir=/home/tianq/zookeeper/build/test/test-cppunit 2014-09-04 21:13:57,703:383481(0x7f8866c4b720):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=127.0.0.1:22181 sessionTimeout=10000 watcher=0x42e590 sessionId=0 sessionPasswd=<null> context=0x7fff695ea9a0 flags=0 2014-09-04 21:13:57,703:383481(0x7f8866c4b720):ZOO_DEBUG@start_threads@221: starting threads... 
2014-09-04 21:13:57,704:383481(0x7f8857fff700):ZOO_DEBUG@do_io@367: started IO thread 2014-09-04 21:13:57,704:383481(0x7f8857fff700):ZOO_INFO@check_events@1705: initiated connection to server [127.0.0.1:22181] 2014-09-04 21:13:57,704:383481(0x7f88667f9700):ZOO_DEBUG@do_completion@459: started completion thread 2014-09-04 21:13:57,714:383481(0x7f8857fff700):ZOO_INFO@check_events@1752: session establishment complete on server [127.0.0.1:22181], sessionId=0x14844044d96000a, negotiated timeout=10000 2014-09-04 21:13:57,714:383481(0x7f8857fff700):ZOO_DEBUG@check_events@1758: Calling a watcher for a ZOO_SESSION_EVENT and the state=ZOO_CONNECTED_STATE 2014-09-04 21:13:57,714:383481(0x7f88667f9700):ZOO_DEBUG@process_completions@2113: Calling a watcher for node [], type = -1 event=ZOO_SESSION_EVENT 2014-09-04 21:13:58,704:383481(0x7f8866c4b720):ZOO_DEBUG@send_last_auth_info@1353: Sending auth info request to 127.0.0.1:22181 {quote} If I understand correctly, it fails because assert expected 0, but looking at the testcase log, "Sending auth info request to .." appears for the first time, so it should correspond to the first zoo_add_auth call in testAuth. but its expected value is ZBADARGUMENTS...? |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 2 years, 35 weeks, 1 day ago | 0|i1zpqn: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2027 | Dynamic quorum weight shifting between datacenters for follow the sun operations |
Improvement | Open | Minor | Unresolved | Unassigned | Hari Sekhon | Hari Sekhon | 04/Sep/14 05:31 | 04/Sep/14 05:31 | 0 | 1 | Relating to ZOOKEEPER-107, which has just added dynamic membership configuration, I'd like to propose a simplified one-command weight shift between datacenters. This will allow for a globalized quorum where the primary datacenter in a follow-the-sun model gets the highest weighting and can achieve low-latency quorum without going over the WAN. Therefore its workload can be prioritized during its business hours. WANdisco has this capability, which is used for its globalized HDFS namespace control. Obviously the current quorum-majority DC must be accessible in order to initiate the quorum failover in such a scenario, and the follow-the-sun nature of this idea also requires this to be scheduler-friendly, so it can automatically follow the sun and shift quorum majority voting several times in a 24-hour period. A single cronned zookeeper command on any zookeeper server should trigger the global coordination and handover of quorum majority to the designated DC. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 5 years, 29 weeks ago | 0|i1zocn: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2026 | Startup order in ServerCnxnFactory-ies is wrong |
Bug | Resolved | Minor | Fixed | Stevo Slavić | Stevo Slavić | Stevo Slavić | 02/Sep/14 03:21 | 29/Sep/14 06:31 | 28/Sep/14 13:23 | 3.4.6, 3.5.0 | 3.4.7, 3.5.1, 3.6.0 | jmx, server | 0 | 5 | {{NIOServerCnxnFactory}} and {{NettyServerCnxnFactory}} {{startup}} method implementations are binding {{ZooKeeperServer}} too late, so {{ZooKeeperServer}} during its startup can fail to register the appropriate JMX MBean. See [this|http://mail-archives.apache.org/mod_mbox/zookeeper-user/201409.mbox/%3CCAAUywg9-ad3oWfqRWahB9PyBEbg6%2Bd%3DDyj5PAUU7A%3Dm9wRncaw%40mail.gmail.com%3E] post on the ZK user mailing list for more details. |
9223372036854775807 | No Perforce job exists for this issue. | 3 | 9223372036854775807 | 5 years, 25 weeks, 3 days ago | jmx | 0|i1zkfr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2025 | Single-node ejection caused apparent reconnection storm, leading to cluster unresponsiveness |
Bug | Open | Major | Unresolved | Unassigned | Stephen Tyree | Stephen Tyree | 29/Aug/14 16:21 | 16/Oct/14 13:40 | 3.4.5 | c client, server | 1 | 7 | Description will be included in an attached PDF. The two main questions we have are: 1: What would be the cause of the "Unreasonable Length" error in our context, and how might we prevent it from occurring? 2: What can we do to prevent the reconnection storm that led to the cluster becoming unresponsive? |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 5 years, 23 weeks ago | 0|i1zi5r: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2024 | Major throughput improvement with mixed workloads |
Improvement | Resolved | Major | Fixed | Kfir Lev-Ari | Kfir Lev-Ari | Kfir Lev-Ari | 28/Aug/14 11:41 | 17/Oct/19 06:24 | 15/May/16 17:35 | 3.6.0 | quorum, server | 2 | 23 | ZOOKEEPER-3182, ZOOKEEPER-2684, ZOOKEEPER-3585, ZOOKEEPER-1609 | The patch is applied to the commit processor, and solves two problems: 1. Stalling - once the commit processor encounters a local write request, it stalls local processing of all sessions until it receives a commit of that request from the leader. In mixed workloads, this severely hampers performance as it does not allow read-only sessions to proceed at faster speed than read-write ones. 2. Starvation - as long as there are read requests to process, older remote committed write requests are starved. This occurs due to a bug fix (https://issues.apache.org/jira/browse/ZOOKEEPER-1505) that forces processing of local read requests before handling any committed write. The problem is only manifested under high local read load. Our solution solves these two problems. It improves throughput in mixed workloads (in our tests, by up to 8x), and reduces latency, especially higher percentiles (i.e., slowest requests). The main idea is to separate sessions that inherently need to stall in order to enforce order semantics, from ones that do not need to stall. To this end, we add data structures for buffering and managing pending requests of stalled sessions; these requests are moved out of the critical path to these data structures, allowing continued processing of unaffected sessions. Please see the docs: 1) https://goo.gl/m1cINJ - includes a detailed description of the new commit processor algorithm. 2) The attached patch implements our solution, and a collection of related unit tests (https://reviews.apache.org/r/25160) 3) https://goo.gl/W0xDUP - performance results. 
(See https://issues.apache.org/jira/browse/ZOOKEEPER-2023 for the corresponding new system test that produced these performance measurements) See also https://issues.apache.org/jira/browse/ZOOKEEPER-1609 |
9223372036854775807 | No Perforce job exists for this issue. | 13 | 9223372036854775807 | 3 years, 6 weeks, 2 days ago | 0|i1zg0v: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2023 | Improved system test |
Test | Patch Available | Minor | Unresolved | Kfir Lev-Ari | Kfir Lev-Ari | Kfir Lev-Ari | 28/Aug/14 07:29 | 11/Sep/14 05:10 | 3.5.0 | contrib-fatjar | 0 | 2 | Adding the ability to perform a system test of mixed workloads using read-only/mixed/write-only clients. In addition, adding few basic latency statistics. https://reviews.apache.org/r/25217/ Just in case it'll help someone, here is an example of how to run generate load system test: 1. Checkout zookeeper-trunk 2. Go to zookeeper-trunk, run "ant jar compile-test" 3. Go to zookeeper-trunk\src\contrib\fatjar, run "ant jar" 4. Copy zookeeper-dev-fatjar.jar from zookeeper-trunk\build\contrib\fatjar to each of the machines you wish to use. 5. On each server, assuming that you've created a valid ZK config file (e.g., zk.cfg) and a dataDir, run: 5.1 java -jar zookeeper-dev-fatjar.jar server ./zk.cfg & 5.2 java -jar zookeeper-dev-fatjar.jar ic <name of this server>:<its client port> <name of this server>:<its client port> /sysTest & 6. And finally, in order to run the test (from some machine), execute the command: java -jar zookeeper-dev-fatjar.jar generateLoad <name of one of the servers>:<its client port> /sysTest <number of servers> <number of read-only clients> <number of mixed workload clients> <number of write-only clients> Note that "/sysTest" is the same name that we used in 5.2. You'll see "Preferred List is empty" message, and after few seconds you should get notifications of "Accepted connection from Socket[....". Afterwards, just set the percentage of the mixed workload clients by entering "percentage <number>" and the test will start. Some explanation regarding the new output (which is printed every 6 seconds, and is reset every time you enter a new percentage). 
Interval: <interval number> <time> Test info: <number of RO clients>xRO <number of mixed workload clients>x<their write percentage>%W <number of write only clients>xWO, percentiles [0.5, 0.9, 0.95, 0.99] Throughput: <current interval throughput> | <minimum throughput until now> <average throughput until now> <maximum throughput until now> Read latency: interval [<interval's read latency values according to the percentiles>], total [<read latency values until now, according to the percentiles>] Write latency: interval [interval's write latency values according to the percentiles], total [<write latency values until now, according to the percentiles>] Note that the throughput is requests per second, and latency is in ms. In addition, if you perform a read only test / write only test, you won't see the printout of write / read latency. |
9223372036854775807 | No Perforce job exists for this issue. | 2 | 9223372036854775807 | 5 years, 28 weeks ago | 0|i1zfq7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2022 | The log of sessionTimeout is inaccurate |
Improvement | Open | Minor | Unresolved | Unassigned | chendihao | chendihao | 28/Aug/14 04:29 | 28/Aug/14 06:10 | java client | 0 | 2 | When the client constructs a ZooKeeper object, it records the basic info in the log. But the logged sessionTimeout may be inaccurate if it's not equal to the negotiated session timeout. Can we change the description of this info? {code:java} public ZooKeeper(String connectString, int sessionTimeout, Watcher watcher) throws IOException { LOG.info("Initiating client connection, connectString=" + connectString + " sessionTimeout=" + sessionTimeout + " watcher=" + watcher); watchManager.defaultWatcher = watcher; cnxn = new ClientCnxn(connectString, sessionTimeout, this, watchManager); cnxn.start(); } {code} |
9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 5 years, 30 weeks ago | 0|i1zfk7: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2021 | ZKserver.cmd fails using the config param |
Bug | Open | Major | Unresolved | Unassigned | Alvaro Gareppe | Alvaro Gareppe | 26/Aug/14 09:22 | 26/Aug/14 09:22 | 3.4.6 | server | 0 | 1 | Windows | When using the command like this: zkServer.cmd zoo.cfg we get this error: ERROR: Invalid arguments, exiting abnormally java.lang.NumberFormatException: For input string :"C:\Development\zookeeperserver-3.4.6\bin\..\conf\zoo.cfg" Patch (Workaround): change the code in zkServer.cmd to: setlocal call "%~dp0zkEnv.cmd" set ZOOCFG=%ZOOCFGDIR%\%1 set ZOOMAIN=org.apache.zookeeper.server.quorum.QuorumPeerMain echo on java "-Dzookeeper.log.dir=%ZOO_LOG_DIR%" "-Dzookeeper.root.logger=%ZOO_LOG4J_PROP%" -cp "%CLASSPATH%" %ZOOMAIN% "%ZOOCFG%" endlocal |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 5 years, 30 weeks, 2 days ago | 0|i1zc9z: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2020 | ZOOKEEPER-1549 Change TRUNC to SNAP in sync phase for safety guarantee |
Sub-task | Open | Major | Unresolved | Unassigned | Hongchao Deng | Hongchao Deng | 24/Aug/14 15:04 | 05/Feb/20 07:16 | 3.5.0 | 3.7.0, 3.5.8 | quorum | 0 | 2 | ZOOKEEPER-1549 discusses the problem: "When the leader started, it will apply every txn in its txnlog (incl. uncommitted ones) into its in-memory data tree" I haven't found any solution that solves this problem in 3.5.x so far. Since this affects only the TRUNC part -- only an old leader that needs TRUNC applies uncommitted txns -- a simple fix would be to change the current TRUNC logic to SNAP. This isn't hard to implement, but guarantees safety. Ideally, we will solve the whole problem by untangling all compatibility issues and fixing the protocol. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 5 years, 30 weeks, 4 days ago | 0|i1za0n: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2019 | Unhandled exception when setting invalid limits data in /zookeeper/quota/some/path/zookeeper_limits |
Bug | Patch Available | Major | Unresolved | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | 22/Aug/14 19:23 | 05/Feb/20 07:11 | 3.7.0, 3.5.8 | server | 0 | 5 | If you have quotas properly set for a given path, i.e.: {noformat} create /zookeeper/quota/test/zookeeper_limits 'count=1,bytes=100' create /zookeeper/quota/test/zookeeper_stats 'count=1,bytes=100' {noformat} and then you update the limits znode with bogus data, i.e.: {noformat} set /zookeeper/quota/test/zookeeper_limits '' {noformat} you'll crash the cluster because IllegalArgumentException isn't handled when dealing with quotas znodes: https://github.com/apache/zookeeper/blob/ZOOKEEPER-823/src/java/main/org/apache/zookeeper/server/DataTree.java#L379 https://github.com/apache/zookeeper/blob/ZOOKEEPER-823/src/java/main/org/apache/zookeeper/server/DataTree.java#L425 We should handle IllegalArgumentException. Optionally, we should also throw BadArgumentsException from PrepRequestProcessor. Review Board: https://reviews.apache.org/r/25968/ |
9223372036854775807 | No Perforce job exists for this issue. | 5 | 9223372036854775807 | 1 year, 17 weeks, 1 day ago | 0|i1z94v: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
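The defensive handling that ZOOKEEPER-2019 above asks for can be illustrated with a toy parser for the 'count=N,bytes=M' limits payload. This is a hypothetical sketch of the idea (catch the malformed-input exception instead of letting it escape and crash the server), not the actual DataTree code:

```python
def parse_limits(data):
    """Return (count, bytes) from a 'count=N,bytes=M' string, or None if malformed."""
    try:
        fields = dict(item.split("=", 1) for item in data.split(","))
        return int(fields["count"]), int(fields["bytes"])
    except (ValueError, KeyError):
        return None          # contain bad input instead of propagating an exception

print(parse_limits("count=1,bytes=100"))  # (1, 100)
print(parse_limits(""))                   # None -- the crashing case in the report
```

Validating the payload at write time (the report's optional suggestion of rejecting it with BadArgumentsException in PrepRequestProcessor) would keep malformed data out of the quota znodes entirely, which is stronger than tolerating it at read time.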
| ZooKeeper | ZOOKEEPER-2018 | Zookeeper node fails to boot if writes are reordered |
Bug | Open | Major | Unresolved | Unassigned | Samer Al-Kiswany | Samer Al-Kiswany | 22/Aug/14 14:17 | 22/Aug/14 21:43 | 3.4.6 | 0 | 2 | After studying the steps ZooKeeper takes to update the logs, we found the following bug. The bug may manifest in file systems with writeback buffering. If you run the zookeeper client script (zkCli.sh) with the following commands: VALUE="8KB value" # 8KB in size create /dir1 $VALUE create /dir1/dir2 $VALUE the strace generated at the zookeeper node is: mkdir(v) create(v/log) append(v/log) trunk(v/log) … fdatasync(v/log) write(v/log) ……. 1 write(v/log) ……. 2 write(v/log) ……. 3 fdatasync(v/log) The last four calls are related to the second create of dir2. If the last write (#3) goes to disk before the second write (#2) and the system crashes before #2 reaches the disk, the zookeeper node will not boot. |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 5 years, 30 weeks, 5 days ago | 0|i1z8nb: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
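The failure mode in ZOOKEEPER-2018 above can be modeled abstractly: if each log record carries its position, replay must stop at the first gap, and a record persisted *after* a missing one must not be applied. This toy simulation is illustrative only; the actual ZooKeeper txn log format (zxids, checksums) differs:

```python
def replay(records):
    """Apply log records in order; stop at the first gap in the sequence."""
    applied, expected = [], 1
    for seq, payload in records:
        if seq != expected:
            break                    # gap: an earlier buffered write was lost
        applied.append(payload)
        expected += 1
    return applied

# write #2 never reached disk before the crash, but write #3 did:
print(replay([(1, "create /dir1"), (3, "create /dir1/dir2")]))  # ['create /dir1']
```

In the reported scenario the node refuses to boot rather than truncating at the gap; detecting the gap and discarding everything from it onward is the usual recovery discipline for writeback-buffered logs.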
| ZooKeeper | ZOOKEEPER-2017 | New tests for reconfig failure cases |
Test | Resolved | Minor | Fixed | Alexander Shraer | Alexander Shraer | Alexander Shraer | 22/Aug/14 10:10 | 07/May/15 14:35 | 24/Aug/14 01:46 | 3.5.0 | 3.5.1, 3.6.0 | tests | 0 | 5 | ZOOKEEPER-2182 | 1) New test file with some reconfig failure cases. 2) Moved testLeaderTimesoutOnNewQuorum from ReconfigTest to the new file 3) Added a check to standaloneDisabledTest.java |
9223372036854775807 | No Perforce job exists for this issue. | 3 | 9223372036854775807 | 4 years, 46 weeks ago | 0|i1z8bz: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2016 | Automate client-side rebalancing |
Improvement | Open | Major | Unresolved | Michael Han | Hongchao Deng | Hongchao Deng | 21/Aug/14 13:23 | 01/Nov/16 20:12 | 0 | 8 | ZOOKEEPER-2037 | ZOOKEEPER-1355 introduced client-side rebalancing, which is implemented in both the C and Java client libraries. However, it requires the client to detect a configuration change and call updateServerList with the new connection string (see the reconfig manual). It may be better if the client could simply indicate interest in this feature when creating a ZK handle, and we would then detect configuration changes and invoke updateServerList under the hood. Reviewboard: https://reviews.apache.org/r/25599/ |
9223372036854775807 | No Perforce job exists for this issue. | 6 | 9223372036854775807 | 3 years, 50 weeks, 6 days ago | 0|i1z6yv: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2015 | I found memory leak in zk client for c++ |
Bug | Resolved | Minor | Not A Problem | Unassigned | gao fengpu | gao fengpu | 21/Aug/14 06:00 | 12/Nov/15 18:11 | 25/Aug/14 01:55 | 3.4.6 | 3.4.6 | c client | 0 | 5 | ZOOKEEPER-1556 | linux/centos 6.3 | ==15070== 895,632 bytes in 57,640 blocks are indirectly lost in loss record 370 of 371 ==15070== at 0x4C2677B: calloc (vg_replace_malloc.c:593) ==15070== by 0x4C59BB: deserialize_String_vector (zookeeper.jute.c:245) ==15070== by 0x4C5AE7: deserialize_GetChildrenResponse (zookeeper.jute.c:874) ==15070== by 0x4BEE7E: zookeeper_process (zookeeper.c:1906) ==15070== by 0x4BFF8E: do_io (mt_adaptor.c:439) ==15070== by 0x4E36850: start_thread (in /lib64/libpthread-2.12.so) ==15070== by 0x58D367C: clone (in /lib64/libc-2.12.so) ==15070== ==15070== 1,946,648 (1,051,016 direct, 895,632 indirect) bytes in 64,035 blocks are definitely lost in loss record 371 of 371 ==15070== at 0x4C2677B: calloc (vg_replace_malloc.c:593) ==15070== by 0x4C59BB: deserialize_String_vector (zookeeper.jute.c:245) ==15070== by 0x4C5AE7: deserialize_GetChildrenResponse (zookeeper.jute.c:874) ==15070== by 0x4BEE7E: zookeeper_process (zookeeper.c:1906) ==15070== by 0x4BFF8E: do_io (mt_adaptor.c:439) ==15070== by 0x4E36850: start_thread (in /lib64/libpthread-2.12.so) ==15070== by 0x58D367C: clone (in /lib64/libc-2.12.so) |
9223372036854775807 | No Perforce job exists for this issue. | 0 | 9223372036854775807 | 4 years, 19 weeks ago | It's not a bug; the client user must release the memory manually. | 0|i1z6fr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2014 | Only admin should be allowed to reconfig a cluster |
Bug | Closed | Blocker | Fixed | Michael Han | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | 20/Aug/14 22:57 | 17/May/17 23:44 | 13/Nov/16 15:10 | 3.5.0 | 3.5.3, 3.6.0 | server | 0 | 15 | ZOOKEEPER-2642, ZOOKEEPER-107 | ZOOKEEPER-107 introduces reconfiguration support via the reconfig() call. We should, at the very least, ensure that only the Admin can reconfigure a cluster. Perhaps restricting access to /zookeeper/config as well, though this is debatable. Surely one could ensure Admin-only access via an ACL, but that would leave everyone who doesn't use ACLs unprotected. We could also force a default ACL to make it a bit more consistent (maybe). Finally, making reconfig() available only to Admins means they have to run with zookeeper.DigestAuthenticationProvider.superDigest (which I am not sure everyone does, or how it would work with other authentication providers). Review board: https://reviews.apache.org/r/51546/ |
9223372036854775807 | No Perforce job exists for this issue. | 15 | 9223372036854775807 | 2 years, 45 weeks, 2 days ago | 0|i1z667: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2013 | typos in zookeeperProgrammers |
Bug | Resolved | Trivial | Fixed | Tim Chambers | Tim Chambers | Tim Chambers | 19/Aug/14 18:12 | 30/Aug/14 14:48 | 20/Aug/14 13:21 | 3.5.1, 3.6.0 | documentation | 0 | 6 | 60 | 60 | 0% | I noticed a couple typos. See patch. | 0% | 0% | 60 | 60 | 9223372036854775807 | No Perforce job exists for this issue. | 1 | 9223372036854775807 | 5 years, 31 weeks ago | 0|i1z2hz: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2012 | HBase client hangs after client-side OOM |
Bug | Patch Available | Minor | Unresolved | Qiang Tian | Qiang Tian | Qiang Tian | 15/Aug/14 00:08 | 24/Aug/14 00:13 | 0 | 3 | Please see http://apache-hbase.679495.n3.nabble.com/HBase-client-hangs-after-client-side-OOM-td4062675.html. It looks like the send thread caught the error successfully, as it ends up running fine, but the cleanup failed to notify the main thread. So I suspect there is a very small timing hole in which the packet is on neither of the two queues at the same time; it looks like it could happen in the latest ClientCnxnSocketNIO#doIO code as well. Potential fixes: 1) add a timeout during the wait; 2) try/catch for the possible timing hole: {code} if (!p.bb.hasRemaining()) { sentCount++; outgoingQueue.removeFirstOccurrence(p); if (p.requestHeader != null && p.requestHeader.getType() != OpCode.ping && p.requestHeader.getType() != OpCode.auth) { synchronized (pendingQueue) { pendingQueue.add(p); } } } {code} Thoughts? Thanks. |
412271 | No Perforce job exists for this issue. | 1 | 412258 | 5 years, 30 weeks, 4 days ago | 0|i1yx9b: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2011 | move legacy zookeeper jar to dist-maven |
Improvement | Patch Available | Minor | Unresolved | Qiang Tian | Qiang Tian | Qiang Tian | 14/Aug/14 23:34 | 21/Aug/14 18:01 | build | 0 | 1 | Since most users do not care about the source code, it would be better to put the fat jar in the dist-maven dir. Also see the README.txt: {quote} zookeeper-<version>.jar - legacy jar file which contains all classes and source files. Prior to version 3.3.0 this was the only jar file available. It has the benefit of having the source included (for debugging purposes) however is also larger as a result {quote} |
412268 | No Perforce job exists for this issue. | 2 | 412255 | 5 years, 31 weeks, 6 days ago | 0|i1yx8n: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2010 | Add logs when ZooKeeper is back running |
Improvement | Open | Minor | Unresolved | Unassigned | Benjamin Jaton | Benjamin Jaton | 14/Aug/14 14:27 | 14/Aug/14 14:27 | 3.4.6 | server | 1 | 3 | NIOServerCnxn produces "Zookeeper not running" logs, which is very useful. It would also be useful to have a way to know when ZooKeeper has recovered from this and is running again. |
412151 | No Perforce job exists for this issue. | 0 | 412140 | 5 years, 32 weeks ago | 0|i1ywjb: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2009 | zkCli does not execute command passed as arguments |
Bug | Resolved | Minor | Duplicate | Unassigned | Simon Cooper | Simon Cooper | 14/Aug/14 05:39 | 16/Aug/16 19:45 | 16/Aug/16 19:45 | 3.4.6 | 0 | 4 | ZOOKEEPER-1897 | In 3.4.5, zkCli executed commands passed on the command line. This command would create the {{/test}} znode and exit, with a non-zero exit code if the command failed: {code} $ ./zkCli.sh create /test null {code} This is no longer the case in 3.4.6 - the command is not executed, but zkCli still runs & exits with a zero exit code. The interim workaround in bash is to use here documents: {code} $ ./zkCli.sh <<EOF create /test null EOF {code} |
regression | 412043 | No Perforce job exists for this issue. | 0 | 412032 | 3 years, 31 weeks, 2 days ago | 0|i1yvvr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2008 | System test fails due to missing leader election port |
Bug | Resolved | Minor | Fixed | Kfir Lev-Ari | Kfir Lev-Ari | Kfir Lev-Ari | 12/Aug/14 05:20 | 30/Aug/14 14:47 | 14/Aug/14 03:37 | 3.5.0 | 3.5.1, 3.6.0 | contrib-fatjar | 0 | 4 | 0 | 0 | 0% | Leader election and client ports are not initialized when creating a QuorumServer during system tests. | 0% | 0% | 0 | 0 | 411455 | No Perforce job exists for this issue. | 2 | 411446 | 5 years, 31 weeks, 6 days ago |
Reviewed
|
0|i1yscf: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2007 | Update RPM package to be relocatable and contrib packaging bugfix |
Bug | Resolved | Major | Won't Fix | Eric Yang | Eric Yang | Eric Yang | 08/Aug/14 17:37 | 03/Mar/16 11:23 | 03/Mar/16 11:23 | 3.4.6, 3.5.0 | 0 | 1 | ZOOKEEPER-1604 | The RPM package init.d startup script is not relocatable, and there are some bugs in the contrib directory build structure where properties are not passed from the main project to contrib; hence some of the contrib projects generate a ${dist.dir} directory instead of building in the top-level build directory. The usage of the BUILD directory is not exactly correct from ZOOKEEPER-1210. The RPM build procedure should use BUILDROOT as the install destination to properly support RPM 4.6+ while the package is building. | 410968 | No Perforce job exists for this issue. | 3 | 410961 | 4 years, 3 weeks ago | 0|i1ype7: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2006 | Standalone mode won't take client port from dynamic config |
Bug | Resolved | Major | Fixed | Hongchao Deng | Hongchao Deng | Hongchao Deng | 07/Aug/14 18:26 | 30/Aug/14 14:29 | 19/Aug/14 16:53 | 3.5.0 | 3.5.1, 3.6.0 | server | 0 | 5 | ZOOKEEPER-1997 | When clientPort is specified in the new format, using "server.x=host:port1:port2;clientPort" in either the static or dynamic file and without a "clientPort = xxxx" statement, a standalone mode server doesn't set up the client port. A second problem is that zkServer.sh looks for the client port in both static and dynamic files, but when looking in the static files it only looks for the "clientPort" statement, so if it's specified in the new format the port will be missed and commands such as "zkServer.sh status" will not work. This is a problem for standalone mode, but also in distributed mode when the server is still LOOKING (once a leader is established and the server is LEADING/FOLLOWING/OBSERVING, a dynamic file is created and the client port will be found by the script). Review Board: https://reviews.apache.org/r/24786/ |
410735 | No Perforce job exists for this issue. | 8 | 410728 | 5 years, 31 weeks, 1 day ago | 0|i1ynz3: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2005 | Failure to setCurrentEpoch on lead |
Bug | Open | Major | Unresolved | Unassigned | Ioannis Canellos | Ioannis Canellos | 07/Aug/14 10:10 | 26/Aug/15 16:46 | 3.4.6 | leaderElection | 0 | 3 | We are embedding the ZooKeeper server in our container, and every now and then I see the exception below when running our integration test suite. This is something that never bothered us when using 3.4.5, but we do see it in 3.4.6. When this occurs, the ensemble is not formed. java.io.IOException: Could not rename temporary file /data/zookeeper/0001/version-2/currentEpoch.tmp to /data/zookeeper/0001/version-2/currentEpoch at org.apache.zookeeper.common.AtomicFileOutputStream.close(AtomicFileOutputStream.java:82) at org.apache.zookeeper.server.quorum.QuorumPeer.writeLongToFile(QuorumPeer.java:1202) at org.apache.zookeeper.server.quorum.QuorumPeer.setCurrentEpoch(QuorumPeer.java:1223) at org.apache.zookeeper.server.quorum.Leader.lead(Leader.java:395) |
410583 | No Perforce job exists for this issue. | 1 | 410577 | 4 years, 30 weeks, 1 day ago | 0|i1yn1r: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2004 | zkCli doesn't output command |
Bug | Resolved | Major | Duplicate | Unassigned | Nelson | Nelson | 05/Aug/14 03:47 | 10/Aug/16 15:32 | 10/Aug/16 15:32 | 3.4.6 | scripts | 1 | 6 | ZOOKEEPER-1897 | linux | Hi, With zookeeper 3.3.6, output is as expected (cf last line which returns the result of ls / {code} nelson@nelson-laptop (0) $ ./zookeeper-3.3.6/bin/zkCli.sh -server 127.0.0.1:2181 ls / Connecting to 127.0.0.1:2181 .... LOGS .... 2014-08-04 16:22:53,032 - INFO [main:Environment@97] - Client environment:user.name=nelson 2014-08-04 16:22:53,032 - INFO [main:Environment@97] - Client environment:user.home=/home/nelson 2014-08-04 16:22:53,033 - INFO [main:Environment@97] - Client environment:user.dir=/home/nelson/git/ 2014-08-04 16:22:53,035 - INFO [main:ZooKeeper@379] - Initiating client connection, connectString=127.0.0.1:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@75af8109 2014-08-04 16:22:53,056 - INFO [main-SendThread():ClientCnxn$SendThread@1058] - Opening socket connection to server /127.0.0.1:2181 2014-08-04 16:22:53,158 - INFO [main-SendThread(127.0.0.1:2181):ClientCnxn$SendThread@947] - Socket connection established to127.0.0.1:2181, initiating session 2014-08-04 16:22:53,216 - INFO [main-SendThread(127.0.0.1:2181):ClientCnxn$SendThread@736] - Session establishment complete on server 127.0.0.1:2181, sessionid = 0x147a10f7d02005a, negotiated timeout = 30000 WATCHER:: WatchedEvent state:SyncConnected type:None path:null [kafka, zookeeper, mesos, marathon, chronos] nelson@nelson-laptop (0) $ {code} With zookeeper 3.4.6 no output {code} nelson@nelson-laptop (0) $ ./zookeeper-3.4.6/bin/zkCli.sh -server 127.0.0.1:2181 ls / Connecting to 127.0.0.1:2181 .... LOGS .... 
2014-08-04 16:22:56,480 [myid:] - INFO [main:Environment@100] - Client environment:user.name=nelson 2014-08-04 16:22:56,480 [myid:] - INFO [main:Environment@100] - Client environment:user.home=/home/nelson 2014-08-04 16:22:56,480 [myid:] - INFO [main:Environment@100] - Client environment:user.dir=/home/nelson/git/ 2014-08-04 16:22:56,481 [myid:] - INFO [main:ZooKeeper@438] - Initiating client connection, connectString=127.0.0.1:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@87d53d8 nelson@nelson-laptop (0) $ {code} |
409987 | No Perforce job exists for this issue. | 0 | 409981 | 3 years, 32 weeks, 1 day ago | zkcli | 0|i1yjev: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2003 | Missing fsync() on the logs parent directory |
Bug | Open | Major | Unresolved | Unassigned | Samer Al-Kiswany | Samer Al-Kiswany | 05/Aug/14 03:04 | 05/Aug/14 13:49 | 3.4.6 | 0 | 4 | After studying the steps ZooKeeper takes to update the logs, we found the following bug. The bug may not manifest in current file system implementations, but it violates the POSIX recommendations and may be an issue in some file systems. Looking at the strace of ZooKeeper we see the following: mkdir(v) create(v/log) append(v/log) trunk(v/log) write(v/log) fdatasync(v/log) Although the data is fdatasynced to the log, the parent directory was never fsynced; consequently, in case of a crash, the parent directory or the log file may be lost, as the parent directory and file metadata were never persisted to disk. To be safe, both the log directory and the parent directory of the log directory should be fsynced as well. |
409977 | No Perforce job exists for this issue. | 0 | 409971 | 5 years, 33 weeks, 2 days ago | 0|i1yjcn: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2002 | Host |
Bug | Resolved | Major | Invalid | Unassigned | Ruel Balabis | Ruel Balabis | 03/Aug/14 04:06 | 03/Aug/14 04:19 | 03/Aug/14 04:19 | 0 | 1 | 2246400 | 2246400 | 0% | 0% | 0% | 2246400 | 2246400 | 409623 | No Perforce job exists for this issue. | 0 | 409618 | 5 years, 33 weeks, 4 days ago | 0|i1yh7z: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2001 | Please provide a parse method with input stream as parameter |
Improvement | Open | Major | Unresolved | Unassigned | Suresh Mathew | Suresh Mathew | 31/Jul/14 18:37 | 21/Aug/14 18:38 | 3.4.6 | 0 | 2 | org.apache.zookeeper.server.quorum.QuorumPeerConfig has the following two methods: 1. public void parse(String path) throws ConfigException 2. public void parseProperties(Properties zkProp) It would be great if you could add a wrapper that takes an input stream. In the first method, halfway through, the path becomes a file input stream, so I assume it's fairly easy to add this wrapper. The reason is that most applications will be getting a stream with the classloader's help. |
409271 | No Perforce job exists for this issue. | 0 | 409267 | 5 years, 31 weeks ago | 0|i1yf3b: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-2000 | ZOOKEEPER-2135 Fix ReconfigTest.testPortChange |
Sub-task | Open | Minor | Unresolved | Alexander Shraer | Alexander Shraer | Alexander Shraer | 31/Jul/14 02:49 | 05/Feb/20 07:16 | 3.5.0 | 3.7.0, 3.5.8 | tests | 0 | 6 | ZOOKEEPER-2137, LOGGING-160 | testPortChange changes all ports and the role of the server, and thus causes existing clients to disconnect, while this wouldn't happen if only the client port changed. We need to fix it to change only the client port and not all the other parameters, and to make sure that existing clients don't disconnect, while new clients shouldn't be able to connect to the old port. |
409074 | No Perforce job exists for this issue. | 2 | 409070 | 5 years, 1 week, 5 days ago | 0|i1ydvr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1999 | Converting CRLF to LF in DynamicConfigBackwardCompatibilityTest |
Bug | Resolved | Major | Fixed | Hongchao Deng | Hongchao Deng | Hongchao Deng | 30/Jul/14 14:09 | 01/Aug/14 17:03 | 01/Aug/14 10:59 | 3.5.0 | 3.5.0 | 0 | 3 | ZOOKEEPER-1992 | The .gitattributes file sets Java files' line endings to LF. DynamicConfigBackwardCompatibilityTest.java uses CRLF and should be converted to LF. |
408924 | No Perforce job exists for this issue. | 1 | 408922 | 5 years, 33 weeks, 6 days ago | 0|i1ycz3: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1998 | C library calls getaddrinfo unconditionally from zookeeper_interest |
Bug | Open | Major | Unresolved | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | 29/Jul/14 15:27 | 14/Dec/19 06:07 | 3.5.0 | 3.7.0 | c client | 1 | 13 | 0 | 3000 | MESOS-2681, ZOOKEEPER-2965 | (commented this on ZOOKEEPER-338) I've just noticed that we call getaddrinfo from zookeeper_interest... on every call. So from zookeeper_interest we always call update_addrs: https://github.com/apache/zookeeper/blob/trunk/src/c/src/zookeeper.c#L2082 which in turn unconditionally calls resolve_hosts: https://github.com/apache/zookeeper/blob/trunk/src/c/src/zookeeper.c#L787 which makes the unconditional calls to getaddrinfo: https://github.com/apache/zookeeper/blob/trunk/src/c/src/zookeeper.c#L648 We should fix this, since it'll make 3.5.0 slower for people relying on DNS. I think this happened as part of ZOOKEEPER-107, in which the list of servers can be updated. cc: [~shralex], [~phunt], [~fpj] |
100% | 100% | 3000 | 0 | pull-request-available | 408654 | No Perforce job exists for this issue. | 0 | 408652 | 29 weeks ago | 0|i1ybc7: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1997 | server with a single line server list shouldn't be StandaloneEnabled |
Bug | Resolved | Major | Duplicate | Unassigned | Hongchao Deng | Hongchao Deng | 28/Jul/14 16:22 | 14/Aug/14 18:48 | 14/Aug/14 18:48 | 0 | 4 | ZOOKEEPER-1992, ZOOKEEPER-2006 | A server goes into standalone mode if there is only a single server line in the server list description; in that case the server line is ignored. The test [testStandaloneQuorum|https://github.com/apache/zookeeper/blob/fa6d21fa8a8acba812237538e4f7172faf969d37/src/java/test/org/apache/zookeeper/test/StandaloneTest.java#L64] was incorrectly successful before -- the client port was ignored and the server was responding through the jetty port. When I added a client port check, it failed. This is caused by the logic in [checkvalidity|https://github.com/apache/zookeeper/blob/4bb76bd22916de8dcfe0c40f649d02d61737e871/src/java/main/org/apache/zookeeper/server/quorum/QuorumPeerConfig.java#L439]: {code} if (numMembers > 1 || (!standaloneEnabled && numMembers > 0)) { ... {code} This assumes standaloneEnabled mode and won't take anything from the server list, where the client port can be defined as introduced in the 3.5 dynamic config format. This is undesirable after introducing reconfig, because a cluster could be set up with one server and then add more later. |
408373 | No Perforce job exists for this issue. | 0 | 408375 | 5 years, 34 weeks, 2 days ago | 0|i1y9of: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1996 | Incorrect statement in documentation |
Bug | Resolved | Major | Duplicate | Unassigned | Dmitry Sivachenko | Dmitry Sivachenko | 28/Jul/14 09:17 | 18/Nov/15 18:27 | 18/Nov/15 18:27 | 3.4.6 | documentation | 1 | 2 | ZOOKEEPER-2243, ZOOKEEPER-1509 | FreeBSD | Documentation contains the following warning about FreeBSD: http://zookeeper.apache.org/doc/r3.4.6/zookeeperAdmin.html#sc_systemReq ------- FreeBSD is supported as a development and production platform for clients only. Java NIO selector support in the FreeBSD JVM is broken. ------- I believe this is outdated info from pre-OpenJDK times. With recent OpenJDK 7 I am running ZooKeeper in production without any problems, and I have asked other people who run it on FreeBSD; they also experience no trouble. I propose to remove this information and list FreeBSD as a supported platform, unless you know of something bad in particular. |
408265 | No Perforce job exists for this issue. | 0 | 408269 | 4 years, 18 weeks, 1 day ago | 0|i1y90v: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1995 | ZOOKEEPER-1992 Safely remove client port in old config file on reconfig itself |
Sub-task | Resolved | Major | Duplicate | Hongchao Deng | Hongchao Deng | Hongchao Deng | 27/Jul/14 22:09 | 01/Aug/14 01:47 | 01/Aug/14 01:47 | 3.5.0 | 3.5.0 | 0 | 1 | 1. check on reconfig the clientPort field. 2. |
408181 | No Perforce job exists for this issue. | 0 | 408185 | 5 years, 34 weeks, 3 days ago | 0|i1y8i7: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1994 | Backup config files. |
Improvement | Resolved | Major | Fixed | Hongchao Deng | Hongchao Deng | Hongchao Deng | 27/Jul/14 22:08 | 30/Aug/14 14:29 | 14/Aug/14 02:19 | 3.5.0 | 3.5.1, 3.6.0 | server | 0 | 5 | We should create a backup file for a static or dynamic configuration file before changing the file. Since the static file is changed at most twice (once when removing the ensemble definitions, at which point a dynamic file doesn't exist yet, and once when removing clientPort information), it's probably fine to back up the static file independently from the dynamic file. To track backup history: Option 1: we could have a .bakXX extension for backups, where XX is a sequence number. Option 2: have the configuration version be part of the file name for dynamic configuration files (instead of inside the file like now), such as zoo_replicated1.cfg.dynamic.1000000; then on reconfiguration simply create a new dynamic file (with a new version) and update the link in the static file to point to the new dynamic one. Review place: https://reviews.apache.org/r/24208/ |
408180 | No Perforce job exists for this issue. | 12 | 408184 | 5 years, 32 weeks ago |
Reviewed
|
0|i1y8hz: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1993 | ZOOKEEPER-1992 Keep the client port upon parsing config |
Sub-task | Resolved | Blocker | Duplicate | Hongchao Deng | Hongchao Deng | Hongchao Deng | 27/Jul/14 22:07 | 01/Aug/14 01:46 | 01/Aug/14 01:46 | 3.5.0 | 3.5.0 | 0 | 3 | 1. The current implementation ignored and removed "clientPort" on parsing. For the sake of backward compatibility, "clientPort" should be kept and used when parsing the config on a fresh boot. 2. When getting clientPort from both the old config and the dynamic file, the one in the dynamic file has higher priority. 3. When "dynamicConfigFile" is set in zoo.cfg and not empty, standalone mode will be disabled. Review board: https://reviews.apache.org/r/24074/ |
408179 | No Perforce job exists for this issue. | 4 | 408183 | 5 years, 34 weeks, 1 day ago | 0|i1y8hr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1992 | backward compatibility of zoo.cfg |
Bug | Resolved | Blocker | Fixed | Hongchao Deng | Hongchao Deng | Hongchao Deng | 27/Jul/14 22:05 | 02/Aug/14 15:00 | 01/Aug/14 01:43 | 3.5.0 | 3.5.0 | 0 | 4 | ZOOKEEPER-1993, ZOOKEEPER-1995 | ZOOKEEPER-1989, ZOOKEEPER-1997, ZOOKEEPER-1999 | This issue supersedes our discussion in ZOOKEEPER-1989. To summarize, ZK users can seamlessly upgrade from 3.4 to 3.5, but two things will happen: 1. the server list will be separated out into a dynamic file (the original should be backed up automatically). 2. The client port is mandatory on reconfig, so when reconfiguring the server itself (its id), the client port in the config file will be removed and replaced by the one from reconfig (written in the dynamic file). |
408178 | No Perforce job exists for this issue. | 11 | 408182 | 5 years, 33 weeks, 5 days ago |
Reviewed
|
0|i1y8hj: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1991 | zkServer.sh returns with a zero exit status when a ZooKeeper process is already running |
Bug | Closed | Minor | Fixed | Biju Nair | Reed Wanderman-Milne | Reed Wanderman-Milne | 25/Jul/14 21:29 | 21/Jul/16 16:18 | 02/Mar/16 13:33 | 3.4.6 | 3.5.2, 3.6.0 | scripts | 0 | 5 | If ZooKeeper is started with zkServer.sh, and an error is shown that a ZooKeeper process is already running, the command returns with an exit status of 0, when it should exit with a non-zero status. Example: $ bin/zkServer.sh start JMX enabled by default Using config: /home/reed/zookeeper/bin/../conf/zoo.cfg Starting zookeeper ... already running as process 25063. $ echo $? 0 This can make it difficult for automated scripts to check whether creating a new ZooKeeper process was successful, as they won't catch the case where a user accidentally started it before. |
408069 | No Perforce job exists for this issue. | 1 | 408076 | 4 years, 3 weeks, 1 day ago | 0|i1y7tz: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1990 | suspicious instantiation of java Random instances |
Bug | Closed | Critical | Fixed | Norbert Kalmár | Patrick D. Hunt | Patrick D. Hunt | 25/Jul/14 19:33 | 20/May/19 13:51 | 10/Sep/18 05:45 | 3.5.0 | 3.6.0, 3.5.5 | 0 | 8 | 0 | 2400 | It's not clear to me why we are doing this, but it looks very suspicious. Why aren't we just calling "new Random()" in these cases? (even for the tests I don't really see it - typically that would just be for repeatability) {noformat} ag "new Random[ \t]*\(" . src/java/main/org/apache/zookeeper/ClientCnxn.java 817: private Random r = new Random(System.nanoTime()); src/java/main/org/apache/zookeeper/client/StaticHostProvider.java 75: sourceOfRandomness = new Random(System.currentTimeMillis() ^ this.hashCode()); 98: sourceOfRandomness = new Random(randomnessSeed); src/java/main/org/apache/zookeeper/server/quorum/AuthFastLeaderElection.java 420: rand = new Random(java.lang.Thread.currentThread().getId() src/java/main/org/apache/zookeeper/server/SyncRequestProcessor.java 64: private final Random r = new Random(System.nanoTime()); src/java/main/org/apache/zookeeper/server/ZooKeeperServer.java 537: Random r = new Random(id ^ superSecret); 554: Random r = new Random(sessionId ^ superSecret); src/java/test/org/apache/zookeeper/server/quorum/WatchLeakTest.java 271: Random r = new Random(SESSION_ID ^ superSecret); src/java/test/org/apache/zookeeper/server/quorum/CommitProcessorTest.java 151: Random rand = new Random(Thread.currentThread().getId()); 258: Random rand = new Random(Thread.currentThread().getId()); 288: Random rand = new Random(Thread.currentThread().getId()); src/java/test/org/apache/zookeeper/test/StaticHostProviderTest.java 40: private Random r = new Random(1); {noformat} |
100% | 100% | 2400 | 0 | pull-request-available | 408058 | No Perforce job exists for this issue. | 0 | 408066 | 1 year, 27 weeks, 3 days ago | 0|i1y7rr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1989 | ZOOKEEPER-1987 backward compatibility of zoo.cfg |
Sub-task | Resolved | Blocker | Fixed | Hongchao Deng | Hongchao Deng | Hongchao Deng | 24/Jul/14 13:38 | 01/Aug/14 10:59 | 01/Aug/14 10:59 | 3.5.0 | 3.5.0 | tests | 0 | 2 | ZOOKEEPER-1992 | Before 3.5.x, users define zoo.cfg with a "clientPort" parameter, which identifies the port on which the server serves clients. After upgrading to 3.5.x, the new format: {noformat} server.$id=$addr:$port1:$port2[:$role];[$cliAddr:]$cliPort {noformat} forces users to define all the client ports across the entire ZK ensemble. The goal of this issue is to preserve backward compatibility when upgrading from 3.4 to 3.5. 1. When a user defines an old-style config file, it should function the same as the old way -- it should use the clientPort variable and shouldn't create a dynamic file. 2. When a user with an old-style config file tries to do reconfig-related jobs, it should stop them and emit a warning. |
407582 | No Perforce job exists for this issue. | 0 | 407596 | 5 years, 34 weeks, 6 days ago | 0|i1y4wf: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1988 | ZOOKEEPER-1987 new test patch to verify dynamic reconfig backward compatibility |
Sub-task | Resolved | Major | Fixed | Alexander Shraer | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | 24/Jul/14 13:26 | 26/Jul/14 07:25 | 25/Jul/14 22:16 | 3.5.0 | tests | 0 | 5 | 407576 | No Perforce job exists for this issue. | 5 | 407590 | 5 years, 34 weeks, 5 days ago | 0|i1y4v3: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1987 | unable to restart 3 node cluster |
Bug | Resolved | Blocker | Fixed | Alexander Shraer | Patrick D. Hunt | Patrick D. Hunt | 23/Jul/14 20:02 | 07/Nov/14 12:25 | 07/Nov/14 12:25 | 3.5.0 | 3.5.1 | tests | 0 | 6 | ZOOKEEPER-1660, ZOOKEEPER-1988, ZOOKEEPER-1989 | ZOOKEEPER-1950 | I tried a fairly simple test, start a three node cluster, bring it down, then restart it. On restart the servers elect the leader and send updates, however the negotiation never completes - the client ports are never bound for example. | 407380 | No Perforce job exists for this issue. | 7 | 407394 | 5 years, 35 weeks ago | 0|i1y3nr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1986 | refactor log trace on touchSession |
Improvement | Resolved | Minor | Fixed | Hongchao Deng | Hongchao Deng | Hongchao Deng | 23/Jul/14 17:22 | 25/Jul/14 07:25 | 24/Jul/14 18:59 | 3.5.0 | 0 | 2 | The previous log trace has a minor mistake after applying ZOOKEEPER-1982: it will show "invalidsession" and "closingsession"; there should be whitespace in between. I might want to further refactor the log wrapper function by using MessageFormat, which would be cleaner. |
407342 | No Perforce job exists for this issue. | 1 | 407356 | 5 years, 34 weeks, 6 days ago |
Reviewed
|
0|i1y3fb: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1985 | Memory leak in C client |
Bug | Open | Major | Unresolved | desmondhe | desmondhe | desmondhe | 22/Jul/14 22:22 | 05/Feb/20 07:16 | 3.4.6 | 3.7.0, 3.5.8 | c client | 0 | 4 | In the file zookeeper.c, most calls of "close_buffer_oarchive(&oa, 0)" should instead be close_buffer_oarchive(&oa, rc < 0 ? 1 : 0); |
407142 | No Perforce job exists for this issue. | 1 | 407158 | 1 year, 17 weeks ago | 0|i1y27b: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1984 | testLeaderTimesoutOnNewQuorum is a flakey test |
Bug | Resolved | Major | Fixed | Alexander Shraer | Patrick D. Hunt | Patrick D. Hunt | 22/Jul/14 17:37 | 24/Jul/14 07:22 | 23/Jul/14 12:37 | 3.5.0 | 3.5.0 | tests | 0 | 3 | I'm seeing intermittent failures in testLeaderTimesoutOnNewQuorum It's failing both on jdk6 and jdk7. (this is my personal jenkins, I haven't see any other failures than this during the past few days). {noformat} junit.framework.AssertionFailedError at org.apache.zookeeper.test.ReconfigTest.testServerHasConfig(ReconfigTest.java:127) at org.apache.zookeeper.test.ReconfigTest.testLeaderTimesoutOnNewQuorum(ReconfigTest.java:450) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:52) {noformat} |
407056 | No Perforce job exists for this issue. | 1 | 407073 | 5 years, 35 weeks ago |
Reviewed
|
0|i1y1of: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1983 | Append to zookeeper.out (not overwrite) to support logrotation |
Bug | In Progress | Major | Unresolved | Shyamal Prasad | Shyamal Prasad | Shyamal Prasad | 22/Jul/14 17:18 | 05/Feb/20 07:11 | 3.3.5, 3.3.6, 3.4.6 | 3.7.0, 3.5.8 | server | 1 | 4 | CentOS 5.x (and probably any Linux distribution for that matter) | Currently zkServer.sh redirects output to zookeeper.out using a simple shell redirect. When logrotate (and similar tools) are used to rotate the zookeeper.out file with 'copytruncate' semantics (copy the file, truncate it to zero bytes), the next write lands at the old file offset, producing a sparse file. Effectively the log file is now full of null bytes, and it is hard to read/use the file (and the rotated copies). Even worse, the zookeeper.out file only gets "larger" (though sparse), and after a while on a chatty system it takes significant CPU resources to compress the file (which is almost all nulls!). The simple fix is to append to the file (>>) instead of using a plain redirection (>). This issue was found in a 3.3.5 production system; however, the code in trunk has the same issue. |
407047 | No Perforce job exists for this issue. | 2 | 407065 | 4 years, 48 weeks, 2 days ago | 0|i1y1mn: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
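The copytruncate failure mode described in ZOOKEEPER-1983 above can be reproduced outside of zkServer.sh. This is a minimal sketch (class and method names are illustrative, not ZooKeeper code) of why, on POSIX systems, a plain `>` redirect leaves a null-padded sparse file after truncation while append mode (`>>`) does not:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.StandardOpenOption;

public class CopyTruncateDemo {
    // Write 100 bytes, truncate the file from a second handle (logrotate's
    // copytruncate step), then write 1 more byte; returns the final file size.
    static long sizeAfterRotation(boolean append) throws IOException {
        File log = File.createTempFile("zookeeper", ".out");
        log.deleteOnExit();
        try (FileOutputStream out = new FileOutputStream(log, append)) {
            out.write(new byte[100]);                     // 100 bytes already logged
            try (FileChannel rotate = FileChannel.open(log.toPath(),
                    StandardOpenOption.WRITE)) {
                rotate.truncate(0);                       // copytruncate
            }
            out.write('x');                               // the next "log line"
        }
        return log.length();
    }

    public static void main(String[] args) throws IOException {
        // '>' redirect: the writer keeps its old offset, so the file goes sparse
        System.out.println("plain redirect:  " + sizeAfterRotation(false));
        // '>>' redirect: O_APPEND semantics write at the current end of file
        System.out.println("append redirect: " + sizeAfterRotation(true));
    }
}
```

On Linux the first call reports 101 bytes (a 100-byte hole plus one real byte), the second reports 1 byte, which is the whole argument for switching zkServer.sh to `>>`.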
| ZooKeeper | ZOOKEEPER-1982 | Refactor (touch|add)Session in SessionTrackerImpl.java |
Improvement | Resolved | Minor | Fixed | Hongchao Deng | Hongchao Deng | Hongchao Deng | 22/Jul/14 16:07 | 24/Jul/14 07:22 | 23/Jul/14 14:17 | 3.5.0 | 0 | 3 | ZOOKEEPER-1978 | This JIRA extends the idea of ZOOKEEPER-1978. Besides refactoring the get-put operations on the concurrentMap in the addSession method, addSession also calls touchSession, which repeatedly checks whether the session exists. So it would be nice to refactor that too. Refactoring the second issue is relevant to ZOOKEEPER-1978, so I created this JIRA to fix both. |
407029 | No Perforce job exists for this issue. | 3 | 407047 | 5 years, 35 weeks ago |
Reviewed
|
0|i1y1in: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1981 | ZOOKEEPER-1970 Fix Dodgy Code Warnings identified by findbugs 2.0.3 |
Sub-task | Resolved | Minor | Fixed | Hongchao Deng | Hongchao Deng | Hongchao Deng | 21/Jul/14 12:45 | 22/Jul/14 07:31 | 22/Jul/14 00:51 | 3.5.0 | 0 | 3 | There are two cases: 1. a duplicate check for null bytes. 2. a lot of switch statements without a default case. For the default case, I suggest 1. throwing an exception as a way to break the program, since it's highly unexpected; or 2. LOG.warn it. I am doing the second right now to keep the original behavior. |
406694 | No Perforce job exists for this issue. | 1 | 406714 | 5 years, 35 weeks, 2 days ago |
Reviewed
|
0|i1xzh3: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1980 | how to draw the figure"ZooKeeper Throughput as the Read-Write Ratio Varies" ? |
Test | Resolved | Major | Done | Unassigned | xinyanzhang | xinyanzhang | 21/Jul/14 03:31 | 28/Jul/14 12:57 | 28/Jul/14 12:57 | 3.4.6 | 3.4.6 | tests | 0 | 2 | 972000 | 972000 | 0% | Each server has one Xeon dual-core 2.4GHz processor, 4GB of RAM, gigabit Ethernet, and two SATA drives. The Linux kernel version is 2.6.18-164.11.1.el5_lustre.1.8.3, the operating system is Red Hat Enterprise Linux Server release 5.4 (Tikanga), and the Java version is 1.7.0_51. | I want to know how to draw the figures "ZooKeeper Throughput as the Read-Write Ratio Varies" and "Reliability in the Presence of Errors". Are they produced by the benchmarking tools provided by the Computer Science department of Brown University? |
0% | 0% | 972000 | 972000 | 406561 | No Perforce job exists for this issue. | 0 | 406581 | 5 years, 34 weeks, 3 days ago | 0|i1xynr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1979 | ZOOKEEPER-1970 Fix Performance Warnings found by Findbugs 2.0.3 |
Sub-task | Resolved | Minor | Fixed | Hongchao Deng | Hongchao Deng | Hongchao Deng | 20/Jul/14 18:30 | 22/Jul/14 07:31 | 22/Jul/14 00:46 | 3.5.0 | 0 | 3 | findbugs complains that {code} new Integer(cnxToValue); {code} should be changed to {code} Integer.parseInt(cnxToValue); {code} |
406526 | No Perforce job exists for this issue. | 1 | 406546 | 5 years, 35 weeks, 2 days ago |
Reviewed
|
0|i1xyfz: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
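The Performance warning fixed in ZOOKEEPER-1979 above comes down to one line. A minimal sketch of the before/after (the variable name follows the snippet in the report; the value is made up):

```java
public class ParseIntDemo {
    public static void main(String[] args) {
        String cnxToValue = "5000"; // hypothetical config value

        // Flagged pattern: allocates a boxed Integer just to get a number,
        // then auto-unboxes it.
        int slow = new Integer(cnxToValue);

        // Preferred: parses straight to a primitive, no allocation.
        int fast = Integer.parseInt(cnxToValue);

        System.out.println(slow == fast); // true: same numeric value either way
    }
}
```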
| ZooKeeper | ZOOKEEPER-1978 | ZOOKEEPER-1970 Fix Multithreaded correctness Warnings |
Sub-task | Resolved | Minor | Duplicate | Hongchao Deng | Hongchao Deng | Hongchao Deng | 20/Jul/14 18:28 | 23/Jul/14 14:18 | 23/Jul/14 14:18 | 3.5.0 | 0 | 3 | ZOOKEEPER-1982 | findbugs is complaining that {code} if (sessionsById.get(id) == null) { SessionImpl s = new SessionImpl(id, sessionTimeout); sessionsById.put(id, s); } {code} is not atomic because of the gap between get() and put(). I suggest using putIfAbsent() instead. |
406525 | No Perforce job exists for this issue. | 1 | 406545 | 5 years, 35 weeks, 1 day ago | 0|i1xyfr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
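The race flagged in ZOOKEEPER-1978 above, and the putIfAbsent() fix, can be sketched as follows (the SessionImpl stand-in and method names are illustrative, not the actual SessionTrackerImpl code):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class PutIfAbsentDemo {
    // Minimal stand-in for SessionTrackerImpl's session record.
    static class SessionImpl {
        final long id;
        final int timeout;
        SessionImpl(long id, int timeout) { this.id = id; this.timeout = timeout; }
    }

    static final ConcurrentMap<Long, SessionImpl> sessionsById = new ConcurrentHashMap<>();

    // Racy: two threads can both see null in get() and both put(),
    // one silently overwriting the other's SessionImpl.
    static SessionImpl addSessionRacy(long id, int timeout) {
        if (sessionsById.get(id) == null) {
            sessionsById.put(id, new SessionImpl(id, timeout));
        }
        return sessionsById.get(id);
    }

    // Atomic: putIfAbsent does the existence check and the insert as one step.
    static SessionImpl addSessionAtomic(long id, int timeout) {
        SessionImpl s = new SessionImpl(id, timeout);
        SessionImpl existing = sessionsById.putIfAbsent(id, s);
        return existing != null ? existing : s;
    }

    public static void main(String[] args) {
        SessionImpl first = addSessionAtomic(1L, 5000);
        SessionImpl second = addSessionAtomic(1L, 9999);
        System.out.println(first == second); // true: the first insert wins
    }
}
```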
| ZooKeeper | ZOOKEEPER-1977 | Calibrate initLimit dynamically |
Wish | Open | Major | Unresolved | Unassigned | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 20/Jul/14 14:32 | 11/Aug/14 13:33 | 2 | 7 | We have seen a number of times users failing to get an ensemble up because the snapshot transfer times out. We should be able to do better than this and calibrate initLimit dynamically. I was thinking concretely that we could have servers increasing the initLimit value (e.g., doubling or increments of 1) upon socket timeouts. The tricky part here is that we need both ends of the communication to increase it. | 406515 | No Perforce job exists for this issue. | 0 | 406535 | 5 years, 32 weeks, 3 days ago | 0|i1xydj: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
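The doubling-on-timeout idea from ZOOKEEPER-1977 above can be sketched as a pure function. The cap value and all names are assumptions for illustration; the tricky part the report mentions, coordinating the increase on both ends of the connection, is not shown:

```java
public class InitLimitBackoff {
    // Double initLimit after each snapshot-transfer timeout, up to a cap.
    // The doubling policy and cap are illustrative, not ZooKeeper behavior.
    static int nextInitLimit(int current, int max) {
        return Math.min(current * 2, max);
    }

    public static void main(String[] args) {
        int initLimit = 10;      // ticks, as configured in zoo.cfg
        int maxInitLimit = 160;  // hypothetical upper bound
        for (int attempt = 1; attempt <= 5; attempt++) {
            initLimit = nextInitLimit(initLimit, maxInitLimit);
            System.out.println("timeout #" + attempt + " -> initLimit=" + initLimit);
        }
    }
}
```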
| ZooKeeper | ZOOKEEPER-1976 | address internationalization issues identified by findbugs 2.0.3 |
Bug | Open | Major | Unresolved | Unassigned | Patrick D. Hunt | Patrick D. Hunt | 19/Jul/14 23:01 | 19/Jun/17 06:33 | 3.5.0, 3.4.11 | server | 0 | 2 | ZOOKEEPER-1975 | Findbugs 2.0.3 found a number of internationalization issues with the code, we ignored these for the time being in ZOOKEEPER-1975, however we should address them one by one eventually. | 406495 | No Perforce job exists for this issue. | 0 | 406515 | 2 years, 39 weeks, 4 days ago | 0|i1xy93: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1975 | ZOOKEEPER-1970 Turn off "internationalization warnings" in findbugs exclude file |
Sub-task | Resolved | Major | Fixed | Patrick D. Hunt | Patrick D. Hunt | Patrick D. Hunt | 19/Jul/14 22:53 | 21/Jul/14 14:57 | 21/Jul/14 14:01 | 3.5.0 | 3.5.0 | server | 0 | 3 | ZOOKEEPER-1976 | A more recent version of findbugs (2.0.3) added some checks - one is for internationalization issues. We should fix these, but for the time being we'll ignore them. (I'll also create a separate jira to address this in future) | 406494 | No Perforce job exists for this issue. | 2 | 406514 | 5 years, 35 weeks, 3 days ago |
Reviewed
|
0|i1xy8v: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1974 | winvs2008 jenkins job failing with "unresolved external symbol" |
Bug | Resolved | Blocker | Fixed | Flavio Paiva Junqueira | Patrick D. Hunt | Patrick D. Hunt | 19/Jul/14 12:57 | 25/Jul/14 13:39 | 25/Jul/14 12:24 | 3.5.0 | 3.5.0 | c client | 0 | 4 | The winvs2008 build is failing with bq. unresolved external symbol __imp__ZOO_READONLY_STATE see: https://builds.apache.org/view/S-Z/view/ZooKeeper/job/ZooKeeper-trunk-WinVS2008/1445/console |
406468 | No Perforce job exists for this issue. | 1 | 406488 | 5 years, 34 weeks, 6 days ago |
Reviewed
|
0|i1xy33: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1973 | Jetty Server changes broke ibm6 support |
Bug | Resolved | Major | Fixed | Bill Havanki | Patrick D. Hunt | Patrick D. Hunt | 19/Jul/14 12:50 | 21/Jul/14 14:57 | 21/Jul/14 13:59 | 3.5.0 | 3.5.0 | server | 0 | 4 | The recent Jetty Server additions ZOOKEEPER-1346 have broken ibm6 support, can someone take a look? (we've had issues like this in the past - typically due to using specialized/sun classes that don't exist in that jdk) https://builds.apache.org/job/ZooKeeper-trunk-ibm6/556/ |
406467 | No Perforce job exists for this issue. | 1 | 406487 | 5 years, 35 weeks, 3 days ago |
Reviewed
|
0|i1xy2v: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1972 | ZOOKEEPER-1970 Fix invalid volatile long/int increment (++) |
Sub-task | Resolved | Major | Fixed | Hongchao Deng | Hongchao Deng | Hongchao Deng | 18/Jul/14 19:44 | 20/Jul/14 07:24 | 19/Jul/14 22:51 | 3.5.0 | 3.5.0 | 0 | 3 | Findbugs complains about incrementing a volatile variable in [AuthFastLeaderElection|https://github.com/apache/zookeeper/blob/087abf83684167ae56864fe4c3be0079fa653266/src/java/main/org/apache/zookeeper/server/quorum/AuthFastLeaderElection.java#L737] and [FastLeaderElection|https://github.com/apache/zookeeper/blob/087abf83684167ae56864fe4c3be0079fa653266/src/java/main/org/apache/zookeeper/server/quorum/FastLeaderElection.java]: {code} volatile long logicalclock; /* Election instance */ ... logicalclock++; {code} This is actually a bug. It should use an AtomicLong here instead of volatile. Leader.java and [QuorumPeer.java|https://github.com/apache/zookeeper/blob/087abf83684167ae56864fe4c3be0079fa653266/src/java/main/org/apache/zookeeper/server/quorum/QuorumPeer.java#L428] and LearnerHandler.java: {code} volatile int tick; {code} I don't think volatile is needed here: tick is incremented only in [Leader.java|https://github.com/apache/zookeeper/blob/087abf83684167ae56864fe4c3be0079fa653266/src/java/main/org/apache/zookeeper/server/quorum/Leader.java#L590]: {code} synchronized (this) { ... if (!tickSkip) { self.tick++; } } {code} and it's protected by the synchronized statement. I just removed the volatile keyword. |
406410 | No Perforce job exists for this issue. | 4 | 406430 | 5 years, 35 weeks, 4 days ago | Awesome! thanks Hongchao! Committed to trunk |
Reviewed
|
0|i1xxq7: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
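The fix in ZOOKEEPER-1972 above replaces `volatile long logicalclock` with an AtomicLong, because `x++` on a volatile is a non-atomic read-modify-write: two threads can read the same value and both write back value+1, losing an increment. A self-contained sketch of the corrected pattern (thread and iteration counts are arbitrary):

```java
import java.util.concurrent.atomic.AtomicLong;

public class LogicalClockDemo {
    // With AtomicLong, incrementAndGet() is a single atomic operation,
    // so no increments are lost under concurrency.
    static final AtomicLong logicalclock = new AtomicLong();

    public static void main(String[] args) throws InterruptedException {
        int threads = 4;
        int perThread = 10_000;
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    logicalclock.incrementAndGet();
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) {
            t.join();
        }
        System.out.println(logicalclock.get()); // always 40000 with AtomicLong
    }
}
```

The same loop with `volatile long logicalclock; logicalclock++;` would usually print something less than 40000, which is exactly the findbugs warning.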
| ZooKeeper | ZOOKEEPER-1971 | Make JMX remote monitoring port configurable |
Improvement | Patch Available | Major | Unresolved | Mohammad Arshad | Biju Nair | Biju Nair | 18/Jul/14 17:33 | 05/Feb/20 07:11 | 3.5.0 | 3.7.0, 3.5.8 | server | 0 | 6 | ZOOKEEPER-1948 | All | This is a follow-up item from ZOOKEEPER-1948. | 406385 | No Perforce job exists for this issue. | 6 | 406405 | 1 year, 45 weeks ago | 0|i1xxkn: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1970 | Fix Findbugs Warnings |
Improvement | Resolved | Major | Implemented | Hongchao Deng | Hongchao Deng | Hongchao Deng | 18/Jul/14 13:46 | 23/Jul/14 14:19 | 23/Jul/14 14:19 | 3.5.0 | 0 | 3 | ZOOKEEPER-1972, ZOOKEEPER-1975, ZOOKEEPER-1978, ZOOKEEPER-1979, ZOOKEEPER-1981 | Findbugs reported a lot of warnings after the upgrade: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2191//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html#Warnings_I18N It would be great to get those warnings settled before the 3.5.0 release. My proposal is: 1. Ignore the "Internationalization Warnings", which are related to encoding, and create a new JIRA to fix encoding later. 2. Fix warnings of: * Multithreaded correctness Warnings * Performance Warnings * Dodgy code Warnings |
406354 | No Perforce job exists for this issue. | 0 | 406375 | 5 years, 35 weeks, 1 day ago | 0|i1xxdz: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1969 | Fix Port Already In Use for JettyAdminServerTest |
Bug | Resolved | Major | Fixed | Hongchao Deng | Hongchao Deng | Hongchao Deng | 18/Jul/14 12:47 | 18/Jul/14 13:56 | 18/Jul/14 13:05 | 3.5.0 | 0 | 4 | The test is failing: {code} failed SocketConnector@0.0.0.0:8080: java.net.BindException: Address already in use {code} I tried assigning a unique port in the test and the Jenkins build is normal again. |
406340 | No Perforce job exists for this issue. | 1 | 406361 | 5 years, 35 weeks, 6 days ago |
Reviewed
|
0|i1xxb3: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1968 | Make Jetty dependencies optional in ivy.xml |
Improvement | Resolved | Major | Fixed | Bill Havanki | Patrick D. Hunt | Patrick D. Hunt | 17/Jul/14 20:24 | 19/Jul/14 07:24 | 18/Jul/14 13:29 | 3.5.0 | 3.5.0 | server | 0 | 3 | ZOOKEEPER-1346 | Should we make the jetty/jackson libraries optional in ivy.xml, given that Jetty Server is itself optional (and uses reflection to boot)? | 406197 | No Perforce job exists for this issue. | 1 | 406218 | 5 years, 35 weeks, 5 days ago |
Reviewed
|
0|i1xwfj: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1967 | Eliminate the temp dynamic config file, find last proposed config in transaction log. |
Improvement | Open | Major | Unresolved | Unassigned | Alexander Shraer | Alexander Shraer | 16/Jul/14 16:46 | 18/Jul/14 13:56 | quorum, server | 0 | 3 | The .next temporary config file is created when a server acks a reconfig proposal. During reconfig commit this file becomes the permanent dynamic config file. This temp file is read (if it exists) during server boot to determine whether there is a reconfig potentially in progress. This info is also available in the transaction log, since reconfig is a transaction. Initially I chose not to take this information from the transaction log, mainly for simplicity, since I believed that we need the last proposed reconfig info before we're processing the transaction log (for example, if we'd like to contact new config servers during FLE - this is discussed in ZOOKEEPER-1807). It would be useful to revisit this issue and check whether we could eliminate the temporary .next dynamic config file, finding the last proposed reconfig in the transaction log. Note that the bulk of the work here will be modifying ReconfigRecoveryTest, which uses .next files to start a server in a state where it thinks it crashed in the middle of a reconfig. |
405878 | No Perforce job exists for this issue. | 0 | 405898 | 5 years, 36 weeks, 1 day ago | 0|i1xuhj: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1966 | VS and line breaks |
Bug | Resolved | Major | Fixed | Orion Hodson | Orion Hodson | Orion Hodson | 15/Jul/14 09:52 | 17/Jul/14 13:23 | 16/Jul/14 12:01 | 3.5.0 | 3.5.0 | c client | 0 | 5 | ZOOKEEPER-1953 | Windows, Visual Studio 2013 Build | The patch attached to https://issues.apache.org/jira/browse/ZOOKEEPER-1953 has caused problems for git users when committed from SVN. The attached patch simply changes the line endings of the offending files from CRLF to LF in the hope that when they are committed to SVN, the LF line endings end up as the canonical representation in the Apache ZooKeeper git repo. An interpretation of what's happened here is that svn has stored the CRLF line endings and these have been pushed into git by git-svn as described here: http://blog.subgit.com/line-endings-handling-in-svn-git-and-subgit/. Git clients are then confused as the text files have an unexpected representation in the repo. Experimentally VS is indifferent to line endings – ran dos2unix on the vcxproj and sln files and VS opened and closed the files without modifying them. This page seems to advertise the indifference to line endings and discusses selecting custom options: http://msdn.microsoft.com/en-us/library/dd409797.aspx. |
405444 | No Perforce job exists for this issue. | 2 | 405469 | 5 years, 36 weeks ago | 0|i1xruf: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1965 | Install could not be done on powerpc Error: Unrecognized opcode: `lock' |
Bug | Open | Major | Unresolved | Unassigned | Sonny Yang | Sonny Yang | 14/Jul/14 22:36 | 14/Jul/14 22:36 | 3.4.5 | c client | 0 | 1 | powerpc64 Linux version 2.6.32-220.el6.ppc64 (mockbuild@ppc-004.build.bos.redhat.com) (gcc version 4.4.5 20110214 (Red Hat 4.4.5-6) (GCC) ) #1 SMP Wed Nov 9 08:02:37 EST 2011 | After (ant compile_jute) and (cd src/c/; ./configure), make cannot be done! It fails with: /tmp/ccyJ6new.s: Assembler messages: /tmp/ccyJ6new.s:67: Error: Unrecognized opcode: `lock' /tmp/ccyJ6new.s:102: Error: Unrecognized opcode: `lock' /tmp/ccyJ6new.s:431: Error: Unrecognized opcode: `lock' /tmp/ccyJ6new.s:464: Error: Unrecognized opcode: `lock' make[1]: *** [libzkmt_la-mt_adaptor.lo] Error 1 make[1]: Leaving directory `/gpfs/ibmu/sjtupower/rawdep/zookeeper-3.4.5/src/c' make: *** [all] Error 2 I don't know how to fix it. |
405341 | No Perforce job exists for this issue. | 0 | 405368 | 5 years, 36 weeks, 2 days ago | 0|i1xr8n: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1964 | Fix Flaky Test in ReconfigTest.java |
Bug | Resolved | Minor | Fixed | Hongchao Deng | Hongchao Deng | Hongchao Deng | 11/Jul/14 19:20 | 15/Jul/14 07:20 | 14/Jul/14 13:46 | 3.5.0 | 0 | 5 | There are flaky tests in ReconfigTest showing something like: {code} junit.framework.AssertionFailedError: Mismatches ElectionAddress! expected:<[127.0.0.1]:12369> but was:<[localhost]:12369> at org.apache.zookeeper.test.ReconfigTest.assertRemotePeerMXBeanAttributes(ReconfigTest.java:967) at org.apache.zookeeper.test.ReconfigTest.testJMXBeanAfterRemoveAddOne(ReconfigTest.java:809) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:52) {code} Basically, the problem is that there might be an inconsistency between the numerical IP and the literal IP. Converting both sides to one form (numerical IP) will fix it. |
405009 | No Perforce job exists for this issue. | 2 | 405045 | 5 years, 36 weeks, 2 days ago | 0|i1xp9z: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
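A sketch of the normalization described in ZOOKEEPER-1964 above: convert both the expected and actual "host:port" strings to numeric-IP form before comparing. The helper name is illustrative, and it assumes a hostname or IPv4 literal (bare, unbracketed IPv6 literals would confuse the colon split); what "localhost" resolves to depends on the host's resolver:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class AddressNormalizeDemo {
    // Normalize "host:port" to "numericIp:port" so that, e.g.,
    // "localhost:12369" and "127.0.0.1:12369" compare equal.
    static String normalize(String hostPort) throws UnknownHostException {
        int colon = hostPort.lastIndexOf(':');
        String host = hostPort.substring(0, colon);
        String port = hostPort.substring(colon + 1);
        return InetAddress.getByName(host).getHostAddress() + ":" + port;
    }

    public static void main(String[] args) throws UnknownHostException {
        // A numeric literal passes through unchanged (no DNS lookup needed).
        System.out.println(normalize("127.0.0.1:12369"));
        // On most hosts "localhost" resolves to the same loopback address.
        System.out.println(normalize("localhost:12369"));
    }
}
```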
| ZooKeeper | ZOOKEEPER-1963 | Make JDK 7 the minimum requirement for Zookeeper |
Improvement | Resolved | Major | Fixed | Hongchao Deng | Edward Carter | Edward Carter | 10/Jul/14 13:20 | 18/Dec/14 05:31 | 17/Dec/14 17:02 | 3.5.0 | 3.5.1, 3.6.0 | build | 0 | 10 | ZOOKEEPER-2046 | JDK 6 stopped receiving public updates in early 2013: http://www.oracle.com/technetwork/java/eol-135779.html I propose making JDK 7 the minimum for Zookeeper going forward. One patch that I've personally submitted already would have been a good fit for Java 7's try-with-resources statement, and another pending patch fails to build on versions of Java prior to 7 because a unit test in it uses InetAddress.getLoopbackAddress(), which would be awkward to replace. I'm sure there are many other examples. |
404737 | No Perforce job exists for this issue. | 2 | 404775 | 5 years, 14 weeks ago | 0|i1xnm7: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1962 | Add a CLI command to recursively list a znode and children |
New Feature | Closed | Minor | Fixed | Gautam Gopalakrishnan | Gautam Gopalakrishnan | Gautam Gopalakrishnan | 10/Jul/14 07:29 | 17/May/17 23:43 | 08/Sep/16 16:53 | 3.4.6 | 3.5.3, 3.6.0 | java client | 2 | 17 | 86400 | 86400 | 0% | When troubleshooting applications where znodes can be multiple levels deep (eg. HBase replication), it is handy to see all child znodes recursively rather than run an ls for each node manually. So I propose adding an option to the "ls" command (-r) which will list all child nodes under a given znode. |
0% | 0% | 86400 | 86400 | 404665 | No Perforce job exists for this issue. | 7 | 404703 | 3 years, 28 weeks ago |
Reviewed
|
0|i1xn6f: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
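The recursive listing proposed in ZOOKEEPER-1962 above is just a depth-first walk over getChildren(). A sketch against a fake, map-backed tree, since a live server isn't needed to show the traversal (the ChildLister interface stands in for a ZooKeeper handle; all names are illustrative):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RecursiveLsDemo {
    // Minimal stand-in for ZooKeeper#getChildren.
    interface ChildLister {
        List<String> getChildren(String path);
    }

    // Depth-first walk collecting every znode path: the behavior of "ls -r /path".
    static void lsRecursive(String path, ChildLister zk, List<String> out) {
        out.add(path);
        for (String child : zk.getChildren(path)) {
            String childPath = path.equals("/") ? "/" + child : path + "/" + child;
            lsRecursive(childPath, zk, out);
        }
    }

    public static void main(String[] args) {
        // A fake tree mimicking the HBase replication layout from the report.
        Map<String, List<String>> tree = new HashMap<>();
        tree.put("/", List.of("hbase"));
        tree.put("/hbase", List.of("replication"));
        tree.put("/hbase/replication", List.of("peers"));
        ChildLister zk = p -> tree.getOrDefault(p, Collections.emptyList());

        List<String> out = new ArrayList<>();
        lsRecursive("/", zk, out);
        out.forEach(System.out::println);
    }
}
```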
| ZooKeeper | ZOOKEEPER-1961 | NPE in ZooKeeperServerMain.shutdown() |
Improvement | Open | Major | Unresolved | Unassigned | Steve Loughran | Steve Loughran | 08/Jul/14 11:56 | 09/Jul/14 09:13 | 3.4.6 | server | 0 | 2 | while trying to stop a server that appears not to have started properly, the {{shutdown()}} method triggered an NPE | 404219 | No Perforce job exists for this issue. | 0 | 404259 | 5 years, 37 weeks, 1 day ago | 0|i1xkg7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1960 | Improve logs in PurgeTxnLog.java to give more details |
Improvement | Open | Major | Unresolved | Unassigned | nijel | nijel | 08/Jul/14 10:29 | 08/Jul/14 10:29 | 0 | 0 | Improve logs in PurgeTxnLog.java to give more details. I suggest adding logs in the following scenarios: 1. If there is no file to purge (the deletion list is empty) - INFO 2. Add logs for debug purposes (txnLog, dataDir and snapDir, passed arguments) |
log | 404198 | No Perforce job exists for this issue. | 0 | 404238 | 5 years, 37 weeks, 2 days ago | 0|i1xkbr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1959 | Move zkEnv files to conf folder, since it is expected to be configured by user |
Improvement | Open | Major | Unresolved | Unassigned | nijel | nijel | 08/Jul/14 10:23 | 08/Jul/14 10:23 | 0 | 1 | Move zkEnv files to conf folder This is the common pattern followed across other hadoop components |
script | 404197 | No Perforce job exists for this issue. | 0 | 404237 | 5 years, 37 weeks, 2 days ago | 0|i1xkbj: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1958 | Client does not detect rejected connections |
Bug | Open | Minor | Unresolved | Unassigned | Erik Anderson | Erik Anderson | 07/Jul/14 20:32 | 08/Jul/14 02:33 | 3.4.6 | c client | 0 | 1 | Windows 7 64bit | When attempting to connect to a zookeeper server that is not currently running, the connection will return the "connection refused" message through the timeout logic. This is because Windows is returning the error code through select->error rather than select->write (which is what the logic is apparently expecting) Patch is pending |
404053 | No Perforce job exists for this issue. | 2 | 404094 | 5 years, 37 weeks, 2 days ago | 0|i1xjg7: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1957 | ReconfigTest can fail when checking remote peer MXBean ElectionAddress |
Test | Open | Minor | Unresolved | Unassigned | Bill Havanki | Bill Havanki | 07/Jul/14 14:33 | 07/Jul/14 19:18 | 3.4.6 | jmx | 0 | 1 | This happens most of the time when I run the {{ReconfigTest}} unit test on Mac OS X Mavericks, using Java 6 or 7. I get failures like this: {noformat} Mismatches ElectionAddress! expected:<[127.0.0.1]:12369> but was:<[localhost]:12369> junit.framework.AssertionFailedError: Mismatches ElectionAddress! expected:<[127.0.0.1]:12369> but was:<[localhost]:12369> at org.apache.zookeeper.test.ReconfigTest.assertRemotePeerMXBeanAttributes(ReconfigTest.java:967) at org.apache.zookeeper.test.ReconfigTest.testJMXBeanAfterRoleChange(ReconfigTest.java:887) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:52) {noformat} It has to do with {{RemotePeerBean.getElectionAddress()}}, which has an InetAddress of "localhost/127.0.0.1" and picks "localhost" over the IP address. I'm not sure if the IP address should have been picked, or if the test should allow "localhost". |
testing | 403967 | No Perforce job exists for this issue. | 0 | 404009 | 5 years, 37 weeks, 3 days ago | 0|i1xixj: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1956 | Support Cleanup script in windows |
Bug | Open | Minor | Unresolved | Unassigned | nijel | nijel | 07/Jul/14 06:04 | 02/Mar/16 20:51 | 3.4.6, 3.5.0 | 1 | 2 | The script zkCleanup.sh supports cleaning the zk data on Linux systems. The same function needs to be supported on Windows as well. |
403868 | No Perforce job exists for this issue. | 0 | 403911 | 4 years, 3 weeks ago | Supporting cleaning snapshots log history via Windows script. | 0|i1xibr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1955 | EOFException on Reading Snapshot |
Bug | Open | Major | Unresolved | Unassigned | Aaron Zimmerman | Aaron Zimmerman | 04/Jul/14 18:30 | 30/Nov/15 10:13 | 3.4.5 | 2 | 7 | We have a 5 node zookeeper cluster that has been operating normally for several months. Starting a few days ago, the entire cluster crashes a few times per day, all nodes at the exact same time. We can't track down the exact issue, but deleting the snapshots and logs and restarting allows the cluster to come back up. We are running exhibitor to monitor the cluster. It appears that something bad gets into the logs, causing an EOFException and this cascades through the entire cluster: 2014-07-04 12:55:26,328 [myid:1] - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Follower@89] - Exception when following the leader java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:375) at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63) at org.apache.zookeeper.server.quorum.QuorumPacket.deserialize(QuorumPacket.java:83) at org.apache.jute.BinaryInputArchive.readRecord(BinaryInputArchive.java:108) at org.apache.zookeeper.server.quorum.Learner.readPacket(Learner.java:152) at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:85) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:740) 2014-07-04 12:55:26,328 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Follower@166] - shutdown called java.lang.Exception: shutdown Follower at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:166) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:744) Then the server dies, exhibitor tries to restart each node, and they all get stuck trying to replay the bad transaction, logging things like: 2014-07-04 12:58:52,734 [myid:1] - INFO [main:FileSnap@83] - Reading snapshot /var/lib/zookeeper/version-2/snapshot.300011fc0 2014-07-04 12:58:52,896 [myid:1] - DEBUG [main:FileTxnLog$FileTxnIterator@575] - Created new input stream 
/var/lib/zookeeper/version-2/log.300000021 2014-07-04 12:58:52,915 [myid:1] - DEBUG [main:FileTxnLog$FileTxnIterator@578] - Created new input archive /var/lib/zookeeper/version-2/log.300000021 2014-07-04 12:59:25,870 [myid:1] - DEBUG [main:FileTxnLog$FileTxnIterator@618] - EOF excepton java.io.EOFException: Failed to read /var/lib/zookeeper/version-2/log.300000021 2014-07-04 12:59:25,871 [myid:1] - DEBUG [main:FileTxnLog$FileTxnIterator@575] - Created new input stream /var/lib/zookeeper/version-2/log.300011fc2 2014-07-04 12:59:25,872 [myid:1] - DEBUG [main:FileTxnLog$FileTxnIterator@578] - Created new input archive /var/lib/zookeeper/version-2/log.300011fc2 2014-07-04 12:59:48,722 [myid:1] - DEBUG [main:FileTxnLog$FileTxnIterator@618] - EOF excepton java.io.EOFException: Failed to read /var/lib/zookeeper/version-2/log.300011fc2 And the cluster is dead. The only way we have found to recover is to delete all of the data and restart. [~fournc] Appreciate any assistance you can offer. |
403690 | No Perforce job exists for this issue. | 1 | 403733 | 4 years, 16 weeks, 3 days ago | 0|i1xh8f: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1954 | StaticHostProvider loses IPv6 scope ID when resolving server addresses |
Bug | In Progress | Minor | Unresolved | Unassigned | Bill Havanki | Bill Havanki | 03/Jul/14 16:45 | 18/Jun/15 08:58 | 3.4.6 | java client | 2 | 3 | ZOOKEEPER-1476, ZOOKEEPER-1256 | I have been getting constant failures of the {{ClientPortBindTest}} unit test (see ZOOKEEPER-1256) on my Macbook. I traced the problem to loss of the IPv6 scope ID on the address chosen for the loopback address in the unit test. The address chosen is: fe80:0:0:0:0:0:0:1%1. The scope ID here is 1, after the percent sign. The scope ID is lost in the {{resolveAndShuffle()}} method of {{StaticHostProvider}}. The method uses {{InetAddress.getByAddress()}} which apparently does not preserve the scope ID in the host string it is passed. {{Inet6Address.getByAddress()}} can, although you have to parse the scope ID out of the host string yourself and pass it as its own parameter. |
403535 | No Perforce job exists for this issue. | 1 | 403579 |
Patch
|
4 years, 40 weeks ago | 0|i1xgaf: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
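ZOOKEEPER-1954 above notes that {{Inet6Address.getByAddress()}} can preserve the scope ID if you parse it out of the host string yourself and pass it as its own parameter. A sketch of that parsing (handles numeric scope IDs only; interface-name scopes like "%en0" would need a NetworkInterface lookup instead, and the helper name is illustrative):

```java
import java.net.Inet6Address;
import java.net.UnknownHostException;

public class ScopeIdDemo {
    // Rebuild an Inet6Address keeping the scope ID that a plain
    // InetAddress.getByAddress(host, bytes) would drop. Not the actual ZK patch.
    static Inet6Address withScope(String literal, byte[] addr) throws UnknownHostException {
        int percent = literal.indexOf('%');
        String host = percent < 0 ? literal : literal.substring(0, percent);
        int scopeId = percent < 0 ? 0 : Integer.parseInt(literal.substring(percent + 1));
        return Inet6Address.getByAddress(host, addr, scopeId);
    }

    public static void main(String[] args) throws UnknownHostException {
        // Raw bytes for fe80::1, the link-local loopback from the report.
        byte[] addr = new byte[16];
        addr[0] = (byte) 0xfe;
        addr[1] = (byte) 0x80;
        addr[15] = 1;
        Inet6Address a = withScope("fe80:0:0:0:0:0:0:1%1", addr);
        System.out.println(a.getScopeId()); // 1: the scope survived
    }
}
```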
| ZooKeeper | ZOOKEEPER-1953 | Add solution and project files to enable build with current Visual Studio editions (VS 2012/2013) - 32-bit and 64-bit. |
Improvement | Resolved | Major | Fixed | Orion Hodson | Patrick D. Hunt | Patrick D. Hunt | 02/Jul/14 23:49 | 15/Jul/14 10:09 | 11/Jul/14 11:52 | 3.5.0 | 3.5.0 | c client | 0 | 6 | ZOOKEEPER-1966 | Add solution and project files to enable build with current Visual Studio editions (VS 2012/2013) - 32-bit and 64-bit. | 403353 | No Perforce job exists for this issue. | 2 | 403404 | 5 years, 36 weeks, 2 days ago | 0|i1xf7r: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1952 | Default log directory and file name can be changed |
Bug | Resolved | Minor | Fixed | nijel | nijel | nijel | 02/Jul/14 08:26 | 02/Mar/15 06:14 | 01/Mar/15 11:50 | 3.4.6 | 3.5.1, 3.6.0 | 0 | 7 | The log folder and log file name are configurable now. The default log folder is "." in the distribution, so the log file (zookeeper.out) will be placed in the bin folder. Can this be changed to <zk_home>/logs/zookeeperserver-<hostname>.log? |
403140 | No Perforce job exists for this issue. | 5 | 403198 | 5 years, 3 weeks, 3 days ago | 0|i1xdy7: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1951 | Clean up raw type compile warnings under Java 7 |
Improvement | In Progress | Trivial | Unresolved | Unassigned | Bill Havanki | Bill Havanki | 01/Jul/14 11:57 | 18/Jun/15 08:58 | 3.4.6 | 0 | 1 | Compiling with Java 1.7.0_60 under Mac OS X 10.9.3, I get five warnings about raw types being found. (Compiler warnings are attached.) These can probably be cleaned up pretty easily. The warnings were observed in ZOOKEEPER-1477. |
generics, java7 | 402940 | No Perforce job exists for this issue. | 2 | 403000 | 5 years, 38 weeks, 2 days ago | 0|i1xcpj: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1950 | configBackwardCompatibilityMode breaks compatibility |
Bug | Resolved | Major | Duplicate | Unassigned | Hongchao Deng | Hongchao Deng | 30/Jun/14 17:43 | 24/Jul/14 12:06 | 24/Jul/14 11:20 | 3.5.0 | 3.5.0 | 0 | 1 | ZOOKEEPER-1987 | The current implementation divides the server information of a legacy config into two separate dynamic config files. There is a problem: when we set the "clientPort" variable in the config file, it gets automatically erased, and later on there is no information about "clientPort" in either the old or the new (.dynamic) config file. It becomes a serious problem when users of *3.4* jump to *3.5* directly without changing their config: when a server crashes and restarts, there is no client port serving. For example, a legacy config might look like: ```zoo.cfg dataDir=/root/zookeeper/groupconfig/conf1/data syncLimit=5 initLimit=10 tickTime=2000 clientPort=2181 server.1=127.0.0.1:2222:2223 server.2=127.0.0.1:3333:3334 server.3=127.0.0.1:4444:4445 ``` After dynamic reconfig, it might look like ```zoo.cfg dataDir=/root/zookeeper/groupconfig/conf1/data syncLimit=5 tickTime=2000 initLimit=10 dynamicConfigFile=./zoo.cfg.dynamic ``` and ```zoo.cfg.dynamic server.1=127.0.0.1:2222:2223:participant server.2=127.0.0.1:3333:3334:participant server.3=127.0.0.1:4444:4445:participant version=e00000000 ``` This starts successfully the first time, but when the server restarts after a crash, it never serves the client port again. |
402747 | No Perforce job exists for this issue. | 0 | 402814 | 5 years, 35 weeks ago | 0|i1xbkv: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
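For reference, the 3.5 dynamic config format does allow the client port to be carried on each server line via a `;clientPort` suffix, so a migration that preserved the port could, as a sketch, emit a dynamic file like the following (ports here are illustrative; distinct client ports are used only because all three servers share 127.0.0.1):

```zoo.cfg.dynamic
server.1=127.0.0.1:2222:2223:participant;2181
server.2=127.0.0.1:3333:3334:participant;2182
server.3=127.0.0.1:4444:4445:participant;2183
```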
| ZooKeeper | ZOOKEEPER-1949 | recipes jar not included in the distribution package |
Bug | Resolved | Major | Fixed | Rakesh Radhakrishnan | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 27/Jun/14 04:01 | 11/Feb/15 06:16 | 11/Feb/15 02:02 | 3.4.7, 3.5.1, 3.6.0 | recipes | 0 | 5 | The following recipe jars do not exist in the distribution "zookeeper-3.4.6.tar.gz": recipes/election/zookeeper-3.4.6-recipes-election.jar recipes/lock/zookeeper-3.4.6-recipes-lock.jar recipes/queue/zookeeper-3.4.6-recipes-queue.jar |
build | 402262 | No Perforce job exists for this issue. | 1 | 402326 | 5 years, 6 weeks, 1 day ago | 0|i1x8kv: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1948 | Enable JMX remote monitoring |
Improvement | Resolved | Major | Fixed | Biju Nair | Biju Nair | Biju Nair | 26/Jun/14 16:21 | 26/Dec/14 19:08 | 29/Sep/14 13:18 | 3.4.6 | 3.4.7, 3.5.1, 3.6.0 | server | 0 | 6 | ZOOKEEPER-1971 | All | The ZooKeeper server startup script includes the option to enable JMX monitoring, but only locally. Can we update the script so that remote monitoring can also be enabled? This will help with data collection and monitoring through a centralized monitoring tool. | 402140 | No Perforce job exists for this issue. | 4 | 402206 | 5 years, 12 weeks, 6 days ago | Changes in zkServer.sh to support JMX remote monitoring of ZooKeeper processes. The change doesn't impact current installations; new installations requiring JMX remote monitoring need to set the JMX port to enable it. | 0|i1x7un: |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
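A sketch of the kind of setting this asks for, assuming zkServer.sh honors `SERVER_JVMFLAGS` (as zkEnv.sh supports) and using the standard JDK remote-JMX system properties; the port is illustrative, and authentication/SSL are disabled here only for brevity, which is not advisable in production:

```
SERVER_JVMFLAGS="-Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9999 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false"
```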
| ZooKeeper | ZOOKEEPER-1947 | Implement a better way to specify joiners |
Improvement | Open | Major | Unresolved | Unassigned | Alexander Shraer | Alexander Shraer | 26/Jun/14 14:48 | 26/Jun/14 14:48 | quorum, server | 0 | 4 | Currently a server must appear in its own config when it starts up. One of the reasons is that the server spec has LE ports through which it can connect to the leader. This means that when creating an initial configuration file of a server we'd like to add to the ensemble, we have to specify an invalid config where the new server already appears. This config is different from the current config and potentially from the new config that may eventually be installed. Besides being a bogus config, this method means we have to be careful when adding multiple servers to the ensemble. If, for example, the current config is (A, B, C) and we'd like to add D and E, server D can have the initial config of (A, B, C, D) and server E the config (A, B, C, E) but not (A, B, C, D, E), since this risks C, D, E forming a quorum and losing data (suppose that C was initially down and now C, D, E don't know the state of A, B and don't know that their own config is bogus). To see why it's risky, consider the indistinguishable case where A, B, C, D, E are all just starting from scratch and A and B are down. One cleaner way to implement this would be to somehow mark the config rows corresponding to joining servers to indicate that they are not part of the config, then take this information into account during leader election so that the joining servers can't vote to elect a leader. Such as server.5.joining=... There are probably other good options to address the issue. |
402122 | No Perforce job exists for this issue. | 0 | 402188 | 5 years, 39 weeks ago | 0|i1x7qn: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
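The `server.5.joining` key above is only a proposal; as a purely hypothetical sketch, the initial config of a joining server under this scheme might look like:

```
server.1=127.0.0.1:2222:2223
server.2=127.0.0.1:3333:3334
server.3=127.0.0.1:4444:4445
# hypothetical marker: server 5 is listed for connectivity only and
# must not vote in leader election until added via reconfig
server.5.joining=127.0.0.1:5555:5556
```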
| ZooKeeper | ZOOKEEPER-1946 | Server logging should reflect dynamically reconfigured address |
Improvement | Resolved | Minor | Fixed | Niko Vuokko | Niko Vuokko | Niko Vuokko | 25/Jun/14 01:34 | 03/Jul/14 07:23 | 03/Jul/14 00:02 | 3.5.0 | 3.5.0 | server | 0 | 5 | The server's client address:port is part of the QuorumPeer's thread name and thus shown in logs. Thread name is not currently updated after dynamic reconfiguration of the address, resulting in confusing log entries. | patch | 401748 | No Perforce job exists for this issue. | 3 | 401817 | 5 years, 38 weeks ago |
Reviewed
|
0|i1x5g7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1945 | deb - zkCli.sh, zkServer.sh and zkEnv.sh regression caused by ZOOKEEPER-1663 |
Bug | Resolved | Major | Fixed | Mark Flickinger | Mark Flickinger | Mark Flickinger | 23/Jun/14 23:51 | 26/Jun/14 07:19 | 25/Jun/14 09:42 | 3.4.6 | 3.4.7, 3.5.0 | 0 | 4 | Linux (Debian 7) with dash shell | This is the same issue as ZOOKEEPER-1719, where the shebang for etc/init.d/zookeeper is set to /bin/sh, but needs to be fixed for deb packages(in src/packages/deb/init.d/zookeeper) | 401456 | No Perforce job exists for this issue. | 1 | 401529 | 5 years, 39 weeks ago | 0|i1x3of: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1944 | The 2pc picture of zookeeperInternals.pdf may be not correct. |
Improvement | Open | Major | Unresolved | Unassigned | zhangshuxing | zhangshuxing | 21/Jun/14 03:00 | 21/Jun/14 22:44 | 3.4.5 | documentation | 0 | 2 | On page 6, the 2PC diagram may not be correct: the commit direction should probably be from the leader to the followers. | 401131 | No Perforce job exists for this issue. | 0 | 401210 | 5 years, 39 weeks, 4 days ago | 0|i1x1pz: ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1943 | "src/contrib/zooinspector/NOTICE.txt" isn't complying to ".gitattributes" in branch-3.4 |
Bug | Resolved | Major | Fixed | Hongchao Deng | Hongchao Deng | Hongchao Deng | 19/Jun/14 13:36 | 22/Jun/14 20:08 | 22/Jun/14 20:07 | 3.4.6 | 3.4.7 | contrib | 0 | 3 | Using the git repo with "branch-3.4" checked out, the "src/contrib/zooinspector/NOTICE.txt" file always shows up as changed but cannot be checked out or reset. This is caused by the ".gitattributes" line "text=auto", where git automatically normalizes line endings. To solve this, I am going to commit the automatic change and submit it as a patch for "branch-3.4". |
400734 | No Perforce job exists for this issue. | 1 | 400828 | 5 years, 39 weeks, 4 days ago |
Reviewed
|
0|i1wzd3: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1942 | ZooKeeper OSGi package imports: org.ietf.jgss dependency missing from manifest |
Bug | Resolved | Major | Duplicate | Unassigned | Kalvin Misquith | Kalvin Misquith | 18/Jun/14 10:48 | 12/Nov/15 15:06 | 21/Aug/15 09:24 | 3.4.6 | 7 | 11 | ZOOKEEPER-2056 | For OSGI applications, the zookeeper manifest file should have org.ietf.jgss in its Import-Package statement. org.apache.zookeeper.client.ZooKeeperSaslClient imports org.ietf.jgss.*. The following ClassDefNotFoundError occurs without it. java.lang.NoClassDefFoundError: org.ietf.jgss.GSSException at java.lang.J9VMInternals.verifyImpl(Native Method) at java.lang.J9VMInternals.verify(J9VMInternals.java:94) at java.lang.J9VMInternals.initialize(J9VMInternals.java:171) at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:945) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1003) Caused by: java.lang.ClassNotFoundException: org.ietf.jgss.GSSException at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:501) at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:421) at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:412) at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107) at java.lang.ClassLoader.loadClass(ClassLoader.java:707) ... 5 more |
400425 | No Perforce job exists for this issue. | 0 | 400524 | 4 years, 19 weeks ago | OSGI | 0|i1wxhz: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1941 | 2014-06-14 12:40:46,886 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server Will not attempt to authenticate using SASL (unknown error) |
Improvement | Open | Major | Unresolved | Unassigned | TIRUMALARAO KONDISETTY | TIRUMALARAO KONDISETTY | 14/Jun/14 13:52 | 14/Jun/14 13:52 | 3.4.6 | java client | 0 | 2 | linux x86-64 | 399457 | No Perforce job exists for this issue. | 0 | 399566 | 5 years, 40 weeks, 5 days ago | 0|i1wrlb: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1940 | Integrate with Docker. |
Wish | Open | Trivial | Unresolved | Unassigned | David Medinets | David Medinets | 13/Jun/14 12:57 | 01/Sep/19 02:40 | 1 | 7 | ZOOKEEPER-3527 | Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications. It has become quite popular, and I'd like to see the ZooKeeper community suggest a standard way to run ZooKeeper inside Docker containers. To get the conversation started, I have a working example at: https://github.com/medined/docker-zookeeper I hope there is a better technique than the one I used; if there is, please make suggestions. The difficulty posed by Docker, I think, is that the images are started before the bridge network is created. This means, again I think, that ZooKeeper is running inside the container with no way to communicate with the ensemble for some non-trivial amount of time. My resolution was to force each node to wait 30 seconds before starting ZooKeeper. I still see connection errors in the logs, but eventually the cluster settles and everything seems to work. I'm hoping that someone with more networking experience than I have can find a way to eliminate that 30-second delay and the connection errors during startup. Thanks for reading this far. |
399344 | No Perforce job exists for this issue. | 0 | 399453 | 28 weeks, 4 days ago | 0|i1wqwv: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
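The fixed 30-second sleep described above could be replaced by actively polling each peer until it accepts connections. A minimal sketch, assuming a wrapper entrypoint calls it before launching ZooKeeper; the class name, ports, and timeouts are illustrative and not part of ZooKeeper:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Hypothetical startup helper for a containerized ensemble: instead of
// sleeping a fixed 30 seconds, poll a peer's port until it is reachable
// or the deadline passes, then start the server.
public final class PeerWait {
    public static boolean waitForPeer(String host, int port, long deadlineMillis) {
        long end = System.currentTimeMillis() + deadlineMillis;
        while (System.currentTimeMillis() < end) {
            try (Socket s = new Socket()) {
                // Short per-attempt timeout; the outer loop enforces the deadline.
                s.connect(new InetSocketAddress(host, port), 1000);
                return true; // peer is accepting connections
            } catch (IOException notYetUp) {
                try {
                    Thread.sleep(500); // brief back-off before retrying
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    return false;
                }
            }
        }
        return false; // deadline expired without a successful connect
    }
}
```

This removes most of the spurious connection errors during startup, at the cost of one extra polling loop per peer.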
| ZooKeeper | ZOOKEEPER-1939 | ReconfigRecoveryTest.testNextConfigUnreachable is failing |
Bug | Resolved | Major | Fixed | Rakesh Radhakrishnan | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 12/Jun/14 02:39 | 26/Jun/14 07:19 | 25/Jun/14 13:29 | 3.4.7, 3.5.0 | tests | 0 | 4 | Following is the failure log message: 2014-06-11 23:53:22,538 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@62] - TEST METHOD FAILED testNextConfigUnreachable java.lang.AssertionError: QP failed to shutdown in 30 seconds: QuorumPeer[myid=0]/127.0.0.1:11251 at org.junit.Assert.fail(Assert.java:93) at org.apache.zookeeper.test.QuorumBase.shutdown(QuorumBase.java:393) at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$TestQPMain.shutdown(QuorumPeerTestBase.java:52) at org.apache.zookeeper.server.quorum.QuorumPeerTestBase$MainThread.shutdown(QuorumPeerTestBase.java:161) at org.apache.zookeeper.server.quorum.ReconfigRecoveryTest.testNextConfigUnreachable(ReconfigRecoveryTest.java:268) |
399007 | No Perforce job exists for this issue. | 1 | 399124 | 5 years, 39 weeks ago |
Reviewed
|
0|i1wow7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1938 | bump version in the C library as we prepare for 3.5.0 release |
Task | Resolved | Trivial | Fixed | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | 12/Jun/14 01:04 | 26/Jun/14 07:19 | 25/Jun/14 13:36 | 3.5.0 | 3.5.0 | c client | 0 | 5 | Building the C library from trunk currently shows 3.4.0 everywhere, which makes testing confusing, so let's bump the version. | 398994 | No Perforce job exists for this issue. | 2 | 399111 | 5 years, 39 weeks ago |
Reviewed
|
0|i1wotb: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1937 | init script needs fixing for ZOOKEEPER-1719 |
Bug | Resolved | Major | Fixed | Marshall McMullen | Nathan Sullivan | Nathan Sullivan | 10/Jun/14 23:28 | 25/Jul/14 07:25 | 24/Jul/14 19:17 | 3.4.6 | 0 | 7 | Linux (Ubuntu 12.04) | ZOOKEEPER-1719 changed the interpreter to bash for zkCli.sh, zkServer.sh and zkEnv.sh, but did not change src/packages/deb/init.d/zookeeper This causes the following failure using /bin/sh [...] root@hostname:~# service zookeeper stop /etc/init.d/zookeeper: 81: /usr/libexec/zkEnv.sh: Syntax error: "(" unexpected (expecting "fi") Simple fix, change the shebang to #!/bin/bash - tested and works fine. |
398701 | No Perforce job exists for this issue. | 2 | 398825 | 5 years, 34 weeks, 6 days ago |
Reviewed
|
0|i1wn33: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
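The fix described in both this issue and ZOOKEEPER-1945 is a one-line shebang change in src/packages/deb/init.d/zookeeper, roughly this patch:

```
-#!/bin/sh
+#!/bin/bash
```

dash (Debian's /bin/sh) rejects the bash-only syntax that zkEnv.sh uses, hence the "Syntax error: "(" unexpected" failure above.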
| ZooKeeper | ZOOKEEPER-1936 | Server exits when unable to create data directory due to race |
Bug | Resolved | Minor | Fixed | Ted Yu | Harald Musum | Harald Musum | 10/Jun/14 03:25 | 24/Jan/20 09:20 | 23/Jan/20 10:47 | 3.4.6, 3.5.0 | 3.6.0 | server | 3 | 16 | 0 | 4200 | We sometime see issues with ZooKeeper server not starting and seeing this error in the log: [2014-05-27 09:29:48.248] ERROR : - .org.apache.zookeeper.server.ZooKeeperServerMain Unexpected exception, exiting abnormally\nexception=\njava.io.IOException: Unable to create data directory /home/y/var/zookeeper/version-2\n\tat org.apache.zookeeper.server.persistence.FileTxnSnapLog.<init>(FileTxnSnapLog.java:85)\n\tat org.apache.zookeeper.server.ZooKeeperServerMain.runFromConfig(ZooKeeperServerMain.java:103)\n\tat org.apache.zookeeper.server.ZooKeeperServerMain.initializeAndRun(ZooKeeperServerMain.java:86)\n\tat org.apache.zookeeper.server.ZooKeeperServerMain.main(ZooKeeperServerMain.java:52)\n\tat org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:116)\n\tat org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:78)\n\t [...] 
Stack trace from JVM gives this: "PurgeTask" daemon prio=10 tid=0x000000000201d000 nid=0x1727 runnable [0x00007f55d7dc7000] java.lang.Thread.State: RUNNABLE at java.io.UnixFileSystem.createDirectory(Native Method) at java.io.File.mkdir(File.java:1310) at java.io.File.mkdirs(File.java:1337) at org.apache.zookeeper.server.persistence.FileTxnSnapLog.<init>(FileTxnSnapLog.java:84) at org.apache.zookeeper.server.PurgeTxnLog.purge(PurgeTxnLog.java:68) at org.apache.zookeeper.server.DatadirCleanupManager$PurgeTask.run(DatadirCleanupManager.java:140) at java.util.TimerThread.mainLoop(Timer.java:555) at java.util.TimerThread.run(Timer.java:505) "zookeeper server" prio=10 tid=0x00000000027df800 nid=0x1715 runnable [0x00007f55d7ed8000] java.lang.Thread.State: RUNNABLE at java.io.UnixFileSystem.createDirectory(Native Method) at java.io.File.mkdir(File.java:1310) at java.io.File.mkdirs(File.java:1337) at org.apache.zookeeper.server.persistence.FileTxnSnapLog.<init>(FileTxnSnapLog.java:84) at org.apache.zookeeper.server.ZooKeeperServerMain.runFromConfig(ZooKeeperServerMain.java:103) at org.apache.zookeeper.server.ZooKeeperServerMain.initializeAndRun(ZooKeeperServerMain.java:86) at org.apache.zookeeper.server.ZooKeeperServerMain.main(ZooKeeperServerMain.java:52) at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:116) at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:78) [...] So it seems that when autopurge is used (as it is in our case), it might happen at the same time as starting the server itself. In FileTxnSnapLog() it will check if the directory exists and create it if not. These two tasks do this at the same time, and mkdir fails and server exits the JVM. |
100% | 100% | 4200 | 0 | pull-request-available | 398490 | No Perforce job exists for this issue. | 7 | 398615 | 8 weeks ago | 0|i1wlsf: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
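The race above is between two concurrent FileTxnSnapLog constructions (server startup and the autopurge PurgeTask) both calling mkdirs(). A minimal sketch of a race-tolerant check; the helper name is hypothetical, and the actual fix committed to ZooKeeper may differ:

```java
import java.io.File;
import java.io.IOException;

// Sketch of a race-tolerant directory check: mkdirs() returning false is
// only fatal if the directory still does not exist afterwards, which
// tolerates a concurrent thread having created it first.
public final class SafeDirs {
    public static void ensureDirectory(File dir) throws IOException {
        if (!dir.mkdirs() && !dir.isDirectory()) {
            throw new IOException("Unable to create data directory " + dir);
        }
    }
}
```

With this pattern, whichever thread loses the mkdirs() race simply observes the directory the winner created and proceeds.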
| ZooKeeper | ZOOKEEPER-1935 | Current ZooKeeper C Library (Client) makes a distinction between removing a child node and adding a child node on receiving ZOO_CHILD_EVENT session event |
Improvement | Open | Major | Unresolved | Unassigned | Fuyan Wang | Fuyan Wang | 10/Jun/14 01:46 | 10/Jun/14 01:46 | c client | 0 | 1 | I want to determine whether a node has added or removed a child node, but based on the current implementation, I can only get the ZOO_CHILD_EVENT event. There is no way to determine whether it is an add-child event or a delete-child event. Enhancing this event would benefit people with a similar requirement. Thanks. |
398474 | No Perforce job exists for this issue. | 0 | 398599 | 5 years, 41 weeks, 2 days ago | 0|i1wlov: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
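Until such an enhancement exists, the usual workaround is to re-read the child list on each ZOO_CHILD_EVENT and diff it against a cached copy from the previous read. A minimal Java sketch of the diffing step (class and method names are illustrative):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Given the previously cached child list and the freshly fetched one,
// compute which children were added and which were removed. The watch
// event itself carries neither piece of information.
public final class ChildDiff {
    public static Set<String> added(List<String> before, List<String> after) {
        Set<String> result = new HashSet<>(after);
        result.removeAll(before); // present now, absent before => added
        return result;
    }

    public static Set<String> removed(List<String> before, List<String> after) {
        Set<String> result = new HashSet<>(before);
        result.removeAll(after); // present before, absent now => removed
        return result;
    }
}
```

Note that a single event may coalesce several changes, so both sets can be non-empty (or a child added and removed between reads can be missed entirely).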
| ZooKeeper | ZOOKEEPER-1934 | Stale data received from sync'd ensemble peer |
Bug | Open | Major | Unresolved | Unassigned | Marshall McMullen | Marshall McMullen | 05/Jun/14 19:01 | 16/Sep/14 12:04 | 3.5.0 | 0 | 5 | In our regression testing we encountered an error wherein we were caching a value we read from zookeeper and then experienced session loss. We subsequently got reconnected to a different zookeeper server. When we tried to read the same path from this new zookeeper server we are getting a stale value. Specifically, we are reading "/binchanges" and originally got back a value of "3" from the first server. After we lost connection and reconnected before the session timeout, we then read "/binchanges" from the new server and got back a value of "2". In our code path we never set this value from 3 to 2. We throw an assertion if the value ever goes backwards. Which is how we caught this error. It's my understanding of the single system image guarantee that this should never be allowed. I realize that the single system image guarantee is still quorum based and it's certainly possible that a minority of the ensemble may have stale data. However, I also believe that each client has to send the highest zxid it's seen as part of its connection request to the server. And if the server it's connecting to has a smaller zxid than the value the client sends, then the connection request should be refused. Assuming I have all of that correct, then I'm at a loss for how this happened. The failure happened around Jun 4 08:13:44. Just before that, at June 4 08:13:30 there was a round of leader election. During that round of leader election we voted server with id=4 and zxid=0x300001c4c. This then led to a new zxid=0x400000001. The new leader sends a diff to all the servers including the one we will soon read the stale data from (id=2). Server with ID=2's log files also reflect that as of 08:13:43 it was up to date and current with an UPTODATE message. I'm going to attach log files from all 5 ensemble nodes. 
I also used zktreeutil to dump the database for the 5 ensemble nodes. I diff'd those and compared them all for correctness. One of the nodes (id=2) has a massively divergent zktreeutil dump compared to the other 4 nodes, even though it received the diff from the new leader. The attachments cover all 5 nodes; I will number each log file by its ZooKeeper id, e.g. node4.log. |
396939 | No Perforce job exists for this issue. | 5 | 397057 | 5 years, 27 weeks, 2 days ago | 0|i1wc6f: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1933 | Windows release build of zk client cannot connect to zk server |
Bug | Resolved | Major | Fixed | Orion Hodson | Norris Lee | Norris Lee | 03/Jun/14 12:59 | 25/Jul/14 13:39 | 25/Jul/14 12:58 | 3.4.6 | 3.5.0 | c client | 1 | 7 | When building zookeeper in Visual Studio in debug mode, the client can connect to the server without error. When building in release mode, I get a continuous error message: {code} 2014-06-02 11:25:20,070:7144(0xc84):ZOO_INFO@zookeeper_init_internal@1008: Initiating client connection, host=192.168.39.43:5181 sessionTimeout=30000 watcher=10049C90 sessionId=0 sessionPasswd=<null> context=001FC0F0 flags=0 2014-06-02 11:25:20,072:7144(0xc84):ZOO_DEBUG@start_threads@221: starting threads... 2014-06-02 11:25:20,072:7144(0x1ea0):ZOO_DEBUG@do_completion@460: started completion thread 2014-06-02 11:25:20,072:7144(0x1e08):ZOO_DEBUG@do_io@403: started IO thread 2014-06-02 11:25:20,072:7144(0x1e08):ZOO_DEBUG@get_next_server_in_reconfig@1148: [OLD] count=0 capacity=0 next=0 hasnext=0 2014-06-02 11:25:20,072:7144(0x1e08):ZOO_DEBUG@get_next_server_in_reconfig@1151: [NEW] count=1 capacity=16 next=0 hasnext=1 2014-06-02 11:25:20,072:7144(0x1e08):ZOO_DEBUG@get_next_server_in_reconfig@1160: Using next from NEW=192.168.39.43:5181 2014-06-02 11:25:20,072:7144(0x1e08):ZOO_DEBUG@zookeeper_interest@1992: [zk] connect() 2014-06-02 11:25:20,158:7144(0x1e08):ZOO_ERROR@handle_socket_error_msg@1847: Socket [192.168.39.43:5181] zk retcode=-4, errno=10035(Unknown error): failed to send a handshake packet: Unknown error 2014-06-02 11:25:20,158:7144(0x1e08):ZOO_DEBUG@handle_error@1595: Previous connection=[192.168.39.43:5181] delay=0 2014-06-02 11:25:20,158:7144(0x1e08):ZOO_DEBUG@get_next_server_in_reconfig@1148: [OLD] count=0 capacity=0 next=0 hasnext=0 2014-06-02 11:25:20,158:7144(0x1e08):ZOO_DEBUG@get_next_server_in_reconfig@1151: [NEW] count=1 capacity=16 next=0 hasnext=1 2014-06-02 11:25:20,158:7144(0x1e08):ZOO_DEBUG@get_next_server_in_reconfig@1160: Using next from NEW=192.168.39.43:5181 2014-06-02 
11:25:20,158:7144(0x1e08):ZOO_DEBUG@zookeeper_interest@1992: [zk] connect() 2014-06-02 11:25:20,159:7144(0x1e08):ZOO_ERROR@handle_socket_error_msg@1847: Socket [192.168.39.43:5181] zk retcode=-4, errno=10035(Unknown error): failed to send a handshake packet: Unknown error 2014-06-02 11:25:20,159:7144(0x1e08):ZOO_DEBUG@handle_error@1595: Previous connection=[192.168.39.43:5181] delay=0 2014-06-02 11:25:20,159:7144(0x1e08):ZOO_DEBUG@get_next_server_in_reconfig@1148: [OLD] count=0 capacity=0 next=0 hasnext=0 2014-06-02 11:25:20,159:7144(0x1e08):ZOO_DEBUG@get_next_server_in_reconfig@1151: [NEW] count=1 capacity=16 next=0 hasnext=1 2014-06-02 11:25:20,159:7144(0x1e08):ZOO_DEBUG@get_next_server_in_reconfig@1160: Using next from NEW=192.168.39.43:5181 2014-06-02 11:25:20,159:7144(0x1e08):ZOO_DEBUG@zookeeper_interest@1992: [zk] connect() 2014-06-02 11:25:20,159:7144(0x1e08):ZOO_ERROR@handle_socket_error_msg@1847: Socket [192.168.39.43:5181] zk retcode=-4, errno=10035(Unknown error): failed to send a handshake packet: Unknown error 2014-06-02 11:25:20,159:7144(0x1e08):ZOO_DEBUG@handle_error@1595: Previous connection=[192.168.39.43:5181] delay=0 2014-06-02 11:25:20,159:7144(0x1e08):ZOO_DEBUG@get_next_server_in_reconfig@1148: [OLD] count=0 capacity=0 next=0 hasnext=0 2014-06-02 11:25:20,159:7144(0x1e08):ZOO_DEBUG@get_next_server_in_reconfig@1151: [NEW] count=1 capacity=16 next=0 hasnext=1 2014-06-02 11:25:20,159:7144(0x1e08):ZOO_DEBUG@get_next_server_in_reconfig@1160: Using next from NEW=192.168.39.43:5181 2014-06-02 11:25:20,159:7144(0x1e08):ZOO_DEBUG@zookeeper_interest@1992: [zk] connect() {code} |
396397 | No Perforce job exists for this issue. | 8 | 396519 | 5 years, 34 weeks, 6 days ago |
Reviewed
|
0|i1w8v3: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1932 | Remove deprecated LeaderElection class |
Bug | Resolved | Major | Fixed | Michael Han | Michi Mutsuzaki | Michi Mutsuzaki | 01/Jun/14 20:58 | 11/May/17 12:42 | 11/May/17 11:01 | 3.5.0 | 3.6.0 | leaderElection | 0 | 7 | ZOOKEEPER-2483 | org.apache.zookeeper.test.LETest.testLE is failing on trunk once in a while. I'm not able to reproduce the failure on my box. I looked at the log, but I couldn't quite figure out what's going on. https://builds.apache.org/view/S-Z/view/ZooKeeper/job/ZooKeeper-trunk/2315/testReport/ Update: ====== Because LE is deprecated there is not much points on spending effort fixing it, as discussed in the JIRA. Updated JIRA title to reflect the state of the issue. |
396034 | No Perforce job exists for this issue. | 3 | 396160 | 2 years, 45 weeks ago | 0|i1w6nr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1931 | intern project idea: implement zab |
New Feature | Open | Major | Unresolved | Michi Mutsuzaki | Michi Mutsuzaki | Michi Mutsuzaki | 01/Jun/14 16:09 | 14/Jan/16 20:12 | 0 | 6 | The goal of this project is to define an interface for a replication protocol and implement the interface using ZAB. This project will most likely be done outside of ZooKeeper to avoid impacting the stability of the ZooKeeper code base, but I'm opening a JIRA here to gauge interest and get feedback from the ZooKeeper community. There are 2 main motivations for this project: 1. There are many use cases that need a replication protocol like ZAB but for which ZooKeeper's hierarchical data model doesn't work well. It's difficult to use ZAB without ZooKeeper with the way the ZooKeeper code is currently structured. 2. It's valuable to have a common interface for a replication protocol to build services on. This allows you to plug in different implementations for benchmarking and testing for correctness. This point is related to ZOOKEEPER-30. The project is roughly broken into 4 pieces: 1. Define the interface for the replication protocol. It's very important to get the interface right; I'd appreciate it if you can help define the interface. 2. Implement the interface with single-node ZAB. 3. Implement a simple reference service, something like a key-value store or a benchmark tool. 4. Implement ZAB, either from scratch or by refactoring / carving off unnecessary parts from the ZooKeeper code base. I have some questions: - How do things like session tracker and dynamic reconfiguration fit into this? Should they be separate optional interfaces? - Where should this project belong? Is it worth making this an incubator project, or should I just put the code on github? I'd like to make it easy for people from different organizations to collaborate (in terms of license grant and all) from the beginning. |
395993 | No Perforce job exists for this issue. | 0 | 396119 | 5 years, 41 weeks, 1 day ago | 0|i1w6en: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1930 | A typo in zookeeper recipes.html |
Bug | Resolved | Minor | Fixed | Chengwei Yang | Chengwei Yang | Chengwei Yang | 28/May/14 01:47 | 29/May/14 07:35 | 28/May/14 13:58 | documentation | 0 | 4 | svn trunk repo, revision 1597694 | With the sequence flag, ZooKeeper automatically appends a sequence number that is *greater that* any one previously appended to a child of "/election". *greater that* above should be *greater than* |
395184 | No Perforce job exists for this issue. | 1 | 395315 | 5 years, 43 weeks ago | 0|i1w1gf: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1929 | std::length_error on update children |
Bug | Closed | Major | Fixed | Charles Strahan | Eduard White | Eduard White | 24/May/14 00:13 | 21/Jul/16 16:18 | 21/Nov/15 15:15 | 3.3.5, 3.4.6, 3.5.1 | 3.4.8, 3.5.2 | contrib-zkfuse | 0 | 5 | debian | Trying to open zk root directory: ./zkfuse -z localhost:2181 -m /CLOUD/zookeeper -d 1 [0x7f89362d0780] INFO zkfuse null - Starting zkfuse cacheSize = 256, debug = 1, forceDirSuffix = "._dir_", mount = "/CLOUD/zookeeper", name = "_data_", zookeeper = "localhost:2181", optind = 6, argc = 6, current arg = "NULL" 1 [0x7f89362d0780] INFO zkfuse null - Create ZK adapter 1 [0x7f89362d0780] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::ZooKeeperAdapter(zk::ZooKeeperConfig, zk::ZKEventListener*, bool)::Trace::Trace(const void*) 0x434ecd Enter 1 [0x7f89362d0780] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::ZooKeeperAdapter(zk::ZooKeeperConfig, zk::ZKEventListener*, bool)::Trace::~Trace() 0x434ecd Exit 1 [0x7f89362bd700] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::processEvents()::Trace::Trace(const void*) 0x434df4 Enter 1 [0x7f89327bb700] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::processUserEvents()::Trace::Trace(const void*) 0x434e60 Enter 1 [0x7f89362d0780] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::reconnect()::Trace::Trace(const void*) 0x434c71 Enter 1 [0x7f89362d0780] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::disconnect()::Trace::Trace(const void*) 0x434c4c Enter 1 [0x7f89362d0780] TRACE zookeeper.adapter null - mp_zkHandle: (nil), state 0 1 [0x7f89362d0780] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::disconnect()::Trace::~Trace() 0x434c4c Exit 2014-05-24 08:07:44,860:20540(0x7f89362d0780):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.6 2014-05-24 08:07:44,860:20540(0x7f89362d0780):ZOO_INFO@log_env@716: Client environment:host.name=nanoha 2014-05-24 08:07:44,860:20540(0x7f89362d0780):ZOO_INFO@log_env@723: Client environment:os.name=Linux 2014-05-24 
08:07:44,860:20540(0x7f89362d0780):ZOO_INFO@log_env@724: Client environment:os.arch=3.2.0-4-amd64 2014-05-24 08:07:44,860:20540(0x7f89362d0780):ZOO_INFO@log_env@725: Client environment:os.version=#1 SMP Debian 3.2.54-2 2014-05-24 08:07:44,860:20540(0x7f89362d0780):ZOO_INFO@log_env@733: Client environment:user.name=root 2014-05-24 08:07:44,860:20540(0x7f89362d0780):ZOO_INFO@log_env@741: Client environment:user.home=/root 2014-05-24 08:07:44,860:20540(0x7f89362d0780):ZOO_INFO@log_env@753: Client environment:user.dir=/opt/zoo/3.4.6/build/contrib/zkfuse/src 2014-05-24 08:07:44,860:20540(0x7f89362d0780):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=localhost:2181 sessionTimeout=1000 watcher=0x429780 sessionId=0 sessionPasswd=<null> context=0x9b5e30 flags=0 2014-05-24 08:07:44,861:20540(0x7f89362d0780):ZOO_DEBUG@start_threads@221: starting threads... 2 [0x7f89362d0780] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::setState(zk::ZooKeeperAdapter::AdapterState)::Trace::Trace(const void*) 0x434c43 Enter 2 [0x7f89362d0780] INFO zookeeper.adapter null - Adapter state transition: 0 -> 1 2014-05-24 08:07:44,861:20540(0x7f8931899700):ZOO_DEBUG@do_completion@459: started completion thread 2 [0x7f89362d0780] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::setState(zk::ZooKeeperAdapter::AdapterState)::Trace::~Trace() 0x434c43 Exit 3 [0x7f89362d0780] DEBUG zookeeper.adapter null - mp_zkHandle: 0x9bbba0, state 1 2014-05-24 08:07:44,861:20540(0x7f893209a700):ZOO_DEBUG@do_io@367: started IO thread 3 [0x7f89362d0780] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::reconnect()::Trace::~Trace() 0x434c71 Exit ZOOKEEPER_ROOT_CHILDREN_WATCH_BUG enabled 3 [0x7f89362d0780] INFO zkfuse null - Initialize fuse 2014-05-24 08:07:44,861:20540(0x7f893209a700):ZOO_INFO@check_events@1705: initiated connection to server [127.0.0.1:2181] FUSE library version: 2.9.3 nullpath_ok: 0 nopath: 0 utime_omit_ok: 0 unique: 1, opcode: INIT (26), nodeid: 0, insize: 56, pid: 0 INIT: 
7.17 flags=0x0000047b max_readahead=0x00020000 INIT: 7.19 flags=0x00000013 max_readahead=0x00020000 max_write=0x00020000 max_background=0 congestion_threshold=0 unique: 1, success, outsize: 40 2014-05-24 08:07:44,909:20540(0x7f893209a700):ZOO_INFO@check_events@1752: session establishment complete on server [127.0.0.1:2181], sessionId=0x1461f2be1b10025, negotiated timeout=4000 2014-05-24 08:07:44,909:20540(0x7f893209a700):ZOO_DEBUG@check_events@1758: Calling a watcher for a ZOO_SESSION_EVENT and the state=ZOO_CONNECTED_STATE 2014-05-24 08:07:44,910:20540(0x7f8931899700):ZOO_DEBUG@process_completions@2113: Calling a watcher for node [], type = -1 event=ZOO_SESSION_EVENT 51 [0x7f8931899700] TRACE zookeeper.adapter null - zk::zkWatcher(zhandle_t*, int, int, const char*, void*)::Trace::Trace(const void*) 0x434e10 Enter 51 [0x7f8931899700] INFO zookeeper.adapter null - Received a ZK event - type: -1, state: 3, path: '' 51 [0x7f8931899700] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::enqueueEvent(int, int, const string&)::Trace::Trace(const void*) 0x434e02 Enter 51 [0x7f8931899700] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::enqueueEvent(int, int, const string&)::Trace::~Trace() 0x434e02 Exit 51 [0x7f8931899700] TRACE zookeeper.adapter null - zk::zkWatcher(zhandle_t*, int, int, const char*, void*)::Trace::~Trace() 0x434e10 Exit 51 [0x7f89362bd700] INFO zookeeper.adapter null - Received SESSION event, state: 3. 
Adapter state: 1 51 [0x7f89362bd700] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::setState(zk::ZooKeeperAdapter::AdapterState)::Trace::Trace(const void*) 0x434c43 Enter 51 [0x7f89362bd700] INFO zookeeper.adapter null - Adapter state transition: 1 -> 2 51 [0x7f89362bd700] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::setState(zk::ZooKeeperAdapter::AdapterState)::Trace::~Trace() 0x434c43 Exit 52 [0x7f89327bb700] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::handleEvent(int, int, const string&)::Trace::Trace(const void*) 0x434e37 Enter 52 [0x7f89327bb700] TRACE zookeeper.adapter null - type: -1, state 3, path: 52 [0x7f89327bb700] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::handleEvent(int, int, const string&, const Listener2Context&)::Trace::Trace(const void*) 0x434de7 Enter 52 [0x7f89327bb700] DEBUG zkfuse null - eventReceived() eventType -1, eventState 3, path 52 [0x7f89327bb700] TRACE zkfuse null - *** CONNECTED *** 52 [0x7f89327bb700] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::handleEvent(int, int, const string&, const Listener2Context&)::Trace::~Trace() 0x434de7 Exit 52 [0x7f89327bb700] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::handleEvent(int, int, const string&)::Trace::~Trace() 0x434e37 Exit 2014-05-24 08:07:46,196:20540(0x7f893209a700):ZOO_DEBUG@zookeeper_process@2264: Got ping response in 0 ms 2014-05-24 08:07:47,531:20540(0x7f893209a700):ZOO_DEBUG@zookeeper_process@2264: Got ping response in 0 ms 2014-05-24 08:07:48,865:20540(0x7f893209a700):ZOO_DEBUG@zookeeper_process@2264: Got ping response in 0 ms 2014-05-24 08:07:50,200:20540(0x7f893209a700):ZOO_DEBUG@zookeeper_process@2264: Got ping response in 0 ms 2014-05-24 08:07:51,535:20540(0x7f893209a700):ZOO_DEBUG@zookeeper_process@2264: Got ping response in 0 ms 2014-05-24 08:07:52,869:20540(0x7f893209a700):ZOO_DEBUG@zookeeper_process@2264: Got ping response in 0 ms unique: 2, opcode: ACCESS (34), nodeid: 1, insize: 48, pid: 16822 unique: 2, error: -38 
(Function not implemented), outsize: 16 unique: 3, opcode: GETATTR (3), nodeid: 1, insize: 56, pid: 16822 getattr / 8606 [0x7f8931098700] DEBUG zkfuse null - zkfuse_getattr(path /) 8606 [0x7f8931098700] DEBUG zkfuse null - getattr(path /) 8606 [0x7f8931098700] DEBUG zkfuse null - getZkPath(path /) 8606 [0x7f8931098700] DEBUG zkfuse null - getZkPath returns /, nameType 2 8606 [0x7f8931098700] DEBUG zkfuse null - open(path /, justCreated 0) 8606 [0x7f8931098700] DEBUG zkfuse null - allocate(path /) 8606 [0x7f8931098700] DEBUG zkfuse null - not found 8606 [0x7f8931098700] DEBUG zkfuse null - free list empty, resize handle 1 8606 [0x7f8931098700] DEBUG zkfuse null - constructor() path / 8606 [0x7f8931098700] DEBUG zkfuse null - incRefCount(count 0) path / 8606 [0x7f8931098700] DEBUG zkfuse null - incRefCount returns 1 8606 [0x7f8931098700] DEBUG zkfuse null - numInUse 1 8606 [0x7f8931098700] DEBUG zkfuse null - allocate returns 1, newFile 1 8606 [0x7f8931098700] DEBUG zkfuse null - update(newFile 1) path / 8606 [0x7f8931098700] DEBUG zkfuse null - initialized children 0, data 0 8606 [0x7f8931098700] DEBUG zkfuse null - has children watch 0, data watch 0 8606 [0x7f8931098700] DEBUG zkfuse null - update children 8606 [0x7f8931098700] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::getNodeChildren(std::vector<std::basic_string<char> >&, const string&, zk::ZKEventListener*, void*)::Trace::Trace(const void*) 0x434ede Enter 8607 [0x7f8931098700] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::validatePath(const string&)::Trace::Trace(const void*) 0x434cc4 Enter 8607 [0x7f8931098700] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::validatePath(const string&)::Trace::~Trace() 0x434cc4 Exit 8607 [0x7f8931098700] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::verifyConnection()::Trace::Trace(const void*) 0x434cb3 Enter 8607 [0x7f8931098700] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::verifyConnection()::Trace::~Trace() 0x434cb3 Exit 2014-05-24 
08:07:53,465:20540(0x7f8931098700):ZOO_DEBUG@zoo_awget_children_@2874: Sending request xid=0x53801b11 for path [/] to 127.0.0.1:2181 2014-05-24 08:07:53,465:20540(0x7f893209a700):ZOO_DEBUG@process_sync_completion@1870: Processing sync_completion with type=3 xid=0x53801b11 rc=0 8607 [0x7f8931098700] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::getNodeChildren(std::vector<std::basic_string<char> >&, const string&, zk::ZKEventListener*, void*)::Trace::~Trace() 0x434ede Exit 8607 [0x7f8931098700] DEBUG zkfuse null - update children done 8607 [0x7f8931098700] DEBUG zkfuse null - node first use or reuse 8607 [0x7f8931098700] DEBUG zkfuse null - update data 8607 [0x7f8931098700] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::getNodeData(const string&, zk::ZKEventListener*, void*, Stat*)::Trace::Trace(const void*) 0x434e82 Enter 8607 [0x7f8931098700] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::validatePath(const string&)::Trace::Trace(const void*) 0x434cc4 Enter 8607 [0x7f8931098700] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::validatePath(const string&)::Trace::~Trace() 0x434cc4 Exit 8607 [0x7f8931098700] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::verifyConnection()::Trace::Trace(const void*) 0x434cb3 Enter 8607 [0x7f8931098700] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::verifyConnection()::Trace::~Trace() 0x434cb3 Exit 2014-05-24 08:07:53,466:20540(0x7f8931098700):ZOO_DEBUG@zoo_awget@2661: Sending request xid=0x53801b12 for path [/] to 127.0.0.1:2181 2014-05-24 08:07:53,466:20540(0x7f893209a700):ZOO_DEBUG@process_sync_completion@1870: Processing sync_completion with type=2 xid=0x53801b12 rc=0 8608 [0x7f8931098700] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::getNodeData(const string&, zk::ZKEventListener*, void*, Stat*)::Trace::~Trace() 0x434e82 Exit 8608 [0x7f8931098700] DEBUG zkfuse null - update data done, latest version 0 8608 [0x7f8931098700] DEBUG zkfuse null - update set active version 0 8608 
[0x7f8931098700] DEBUG zkfuse null - update returns 0 8608 [0x7f8931098700] DEBUG zkfuse null - open returns 1 8608 [0x7f8931098700] DEBUG zkfuse null - getattr(nameType 2) path / 8608 [0x7f8931098700] DEBUG zkfuse null - isRegNameType(nameType 2) returns 0 8608 [0x7f8931098700] DEBUG zkfuse null - directory 8608 [0x7f8931098700] DEBUG zkfuse null - hasChild(childPath /.zkfuse.dir) returns 0 8608 [0x7f8931098700] DEBUG zkfuse null - getattr returns 0 8608 [0x7f8931098700] DEBUG zkfuse null - close() path / 8608 [0x7f8931098700] DEBUG zkfuse null - flush() path / 8608 [0x7f8931098700] DEBUG zkfuse null - not dirty 8608 [0x7f8931098700] DEBUG zkfuse null - flush returns 0 8608 [0x7f8931098700] DEBUG zkfuse null - deallocate(handle 1) 8608 [0x7f8931098700] DEBUG zkfuse null - incRefCount(count -1) path / 8608 [0x7f8931098700] DEBUG zkfuse null - incRefCount returns 0 8608 [0x7f8931098700] DEBUG zkfuse null - path / ref count 0 8608 [0x7f8931098700] DEBUG zkfuse null - deallocate done 8608 [0x7f8931098700] DEBUG zkfuse null - close returns 0 8608 [0x7f8931098700] DEBUG zkfuse null - getattr returns 0 8608 [0x7f8931098700] DEBUG zkfuse null - zkfuse_getattr returns 0 unique: 3, success, outsize: 120 unique: 4, opcode: OPENDIR (27), nodeid: 1, insize: 48, pid: 16822 opendir flags: 0x98800 / unique: 5, opcode: INTERRUPT (36), nodeid: 0, insize: 48, pid: 0 8610 [0x7f8930897700] DEBUG zkfuse null - zkfuse_opendir(path /) INTERRUPT: 4 8610 [0x7f8930897700] DEBUG zkfuse null - getZkPath(path /) 8610 [0x7f8930897700] DEBUG zkfuse null - getZkPath returns /, nameType 2 8610 [0x7f8930897700] DEBUG zkfuse null - open(path /, justCreated 0) 8610 [0x7f8930897700] DEBUG zkfuse null - allocate(path /) 8610 [0x7f8930897700] DEBUG zkfuse null - found 8610 [0x7f8930897700] DEBUG zkfuse null - incRefCount(count 1) path / 8610 [0x7f8930897700] DEBUG zkfuse null - incRefCount returns 1 8610 [0x7f8930897700] DEBUG zkfuse null - resurrecting zombie, numInUse 1 8610 [0x7f8930897700] DEBUG 
zkfuse null - allocate returns 1, newFile 0 8611 [0x7f8930897700] DEBUG zkfuse null - update(newFile 0) path / 8611 [0x7f8930897700] DEBUG zkfuse null - initialized children 1, data 1 8611 [0x7f8930897700] DEBUG zkfuse null - has children watch 1, data watch 1 8611 [0x7f8930897700] DEBUG zkfuse null - update children 8611 [0x7f8930897700] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::getNodeChildren(std::vector<std::basic_string<char> >&, const string&, zk::ZKEventListener*, void*)::Trace::Trace(const void*) 0x434ede Enter 8611 [0x7f8930897700] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::validatePath(const string&)::Trace::Trace(const void*) 0x434cc4 Enter 8611 [0x7f8930897700] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::validatePath(const string&)::Trace::~Trace() 0x434cc4 Exit 8611 [0x7f8930897700] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::verifyConnection()::Trace::Trace(const void*) 0x434cb3 Enter 8611 [0x7f8930897700] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::verifyConnection()::Trace::~Trace() 0x434cb3 Exit 2014-05-24 08:07:53,469:20540(0x7f8930897700):ZOO_DEBUG@zoo_awget_children_@2874: Sending request xid=0x53801b13 for path [/] to 127.0.0.1:2181 2014-05-24 08:07:53,469:20540(0x7f893209a700):ZOO_DEBUG@process_sync_completion@1870: Processing sync_completion with type=3 xid=0x53801b13 rc=0 8611 [0x7f8930897700] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::getNodeChildren(std::vector<std::basic_string<char> >&, const string&, zk::ZKEventListener*, void*)::Trace::~Trace() 0x434ede Exit 8611 [0x7f8930897700] DEBUG zkfuse null - update children done 8611 [0x7f8930897700] DEBUG zkfuse null - node first use or reuse 8611 [0x7f8930897700] DEBUG zkfuse null - update set active version 0 8611 [0x7f8930897700] DEBUG zkfuse null - update returns 0 8611 [0x7f8930897700] DEBUG zkfuse null - open returns 1 8611 [0x7f8930897700] DEBUG zkfuse null - incOpenDirCount(count 1) path / 8611 [0x7f8930897700] DEBUG zkfuse null 
- incOpenDirCount returns 1 8611 [0x7f8930897700] DEBUG zkfuse null - zkfuse_opendir returns 0 opendir[1] flags: 0x98800 / unique: 4, success, outsize: 32 unique: 6, opcode: READDIR (28), nodeid: 1, insize: 80, pid: 16822 readdir[1] from 0 8612 [0x7f8913fff700] DEBUG zkfuse null - zkfuse_readdir(path /, offset 0) 8612 [0x7f8913fff700] DEBUG zkfuse null - readdir(offset 0) path / 8612 [0x7f8913fff700] DEBUG zkfuse null - isMeta(childName /aliases.json) returns 0 8612 [0x7f8913fff700] DEBUG zkfuse null - isMeta(childName /clusterstate.json) returns 0 8612 [0x7f8913fff700] DEBUG zkfuse null - isMeta(childName /collections) returns 0 8612 [0x7f8913fff700] DEBUG zkfuse null - isMeta(childName /configs) returns 0 8612 [0x7f8913fff700] DEBUG zkfuse null - isMeta(childName /live_nodes) returns 0 8612 [0x7f8913fff700] DEBUG zkfuse null - isMeta(childName /overseer) returns 0 8612 [0x7f8913fff700] DEBUG zkfuse null - isMeta(childName /overseer_elect) returns 0 8612 [0x7f8913fff700] DEBUG zkfuse null - isMeta(childName /zookeeper) returns 0 8612 [0x7f8913fff700] DEBUG zkfuse null - open(path /aliases.json, justCreated 0) 8612 [0x7f8913fff700] DEBUG zkfuse null - allocate(path /aliases.json) 8612 [0x7f8913fff700] DEBUG zkfuse null - not found 8612 [0x7f8913fff700] DEBUG zkfuse null - free list empty, resize handle 2 8612 [0x7f8913fff700] DEBUG zkfuse null - constructor() path /aliases.json 8612 [0x7f8913fff700] DEBUG zkfuse null - incRefCount(count 0) path /aliases.json 8612 [0x7f8913fff700] DEBUG zkfuse null - incRefCount returns 1 8612 [0x7f8913fff700] DEBUG zkfuse null - numInUse 2 8612 [0x7f8913fff700] DEBUG zkfuse null - allocate returns 2, newFile 1 8612 [0x7f8913fff700] DEBUG zkfuse null - update(newFile 1) path /aliases.json 8612 [0x7f8913fff700] DEBUG zkfuse null - initialized children 0, data 0 8612 [0x7f8913fff700] DEBUG zkfuse null - has children watch 0, data watch 0 8613 [0x7f8913fff700] DEBUG zkfuse null - update children 8613 [0x7f8913fff700] TRACE 
zookeeper.adapter null - zk::ZooKeeperAdapter::getNodeChildren(std::vector<std::basic_string<char> >&, const string&, zk::ZKEventListener*, void*)::Trace::Trace(const void*) 0x434ede Enter 8613 [0x7f8913fff700] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::validatePath(const string&)::Trace::Trace(const void*) 0x434cc4 Enter 8613 [0x7f8913fff700] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::validatePath(const string&)::Trace::~Trace() 0x434cc4 Exit 8613 [0x7f8913fff700] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::verifyConnection()::Trace::Trace(const void*) 0x434cb3 Enter 8613 [0x7f8913fff700] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::verifyConnection()::Trace::~Trace() 0x434cb3 Exit 2014-05-24 08:07:53,471:20540(0x7f8913fff700):ZOO_DEBUG@zoo_awget_children_@2874: Sending request xid=0x53801b14 for path [/aliases.json] to 127.0.0.1:2181 2014-05-24 08:07:53,471:20540(0x7f893209a700):ZOO_DEBUG@process_sync_completion@1870: Processing sync_completion with type=3 xid=0x53801b14 rc=0 8613 [0x7f8913fff700] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::getNodeChildren(std::vector<std::basic_string<char> >&, const string&, zk::ZKEventListener*, void*)::Trace::~Trace() 0x434ede Exit 8613 [0x7f8913fff700] DEBUG zkfuse null - update children done 8613 [0x7f8913fff700] DEBUG zkfuse null - node first use or reuse 8613 [0x7f8913fff700] DEBUG zkfuse null - update data 8613 [0x7f8913fff700] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::getNodeData(const string&, zk::ZKEventListener*, void*, Stat*)::Trace::Trace(const void*) 0x434e82 Enter 8613 [0x7f8913fff700] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::validatePath(const string&)::Trace::Trace(const void*) 0x434cc4 Enter 8613 [0x7f8913fff700] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::validatePath(const string&)::Trace::~Trace() 0x434cc4 Exit 8614 [0x7f8913fff700] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::verifyConnection()::Trace::Trace(const void*) 
0x434cb3 Enter 8614 [0x7f8913fff700] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::verifyConnection()::Trace::~Trace() 0x434cb3 Exit 2014-05-24 08:07:53,472:20540(0x7f8913fff700):ZOO_DEBUG@zoo_awget@2661: Sending request xid=0x53801b15 for path [/aliases.json] to 127.0.0.1:2181 2014-05-24 08:07:53,472:20540(0x7f893209a700):ZOO_DEBUG@process_sync_completion@1870: Processing sync_completion with type=2 xid=0x53801b15 rc=0 8614 [0x7f8913fff700] TRACE zookeeper.adapter null - zk::ZooKeeperAdapter::getNodeData(const string&, zk::ZKEventListener*, void*, Stat*)::Trace::~Trace() 0x434e82 Exit terminate called after throwing an instance of 'std::length_error' what(): basic_string::_S_create |
394689 | No Perforce job exists for this issue. | 1 | 394825 | 4 years, 17 weeks, 5 days ago | Fix a bug in zkfuse that causes an abort upon reading a node's content | 0|i1vyfr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1928 | add configurable throttling to the number of snapshots concurrently sent by a leader |
Improvement | Resolved | Major | Fixed | Edward Carter | Edward Carter | Edward Carter | 21/May/14 20:46 | 10/Jun/14 07:16 | 09/Jun/14 18:19 | server | 0 | 6 | 604800 | 604800 | 0% | We want to add configurable throttling to the number of snapshots concurrently sent by a leader. Without this, when recovering from a partial outage or network partition, the leader can become overloaded and unresponsive due to its attempts to send snapshots to too many followers and observers all at once. The throttle will operate by terminating the connection of any observer receiving a snapshot deemed to be in excess of the throttle. Followers should be allowed to receive snapshots unconditionally, though those snapshots do count against the quota. I have a patch ready which implements this. |
0% | 0% | 604800 | 604800 | 394186 | No Perforce job exists for this issue. | 3 | 394324 | 5 years, 41 weeks, 2 days ago | 0|i1vvdz: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
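The throttling scheme ZOOKEEPER-1928 describes can be sketched as a counting quota on in-flight snapshot sends: observers past the limit are rejected (their connection would be terminated), while followers are always admitted but still count against the quota. This is an illustrative sketch only, not the actual patch; the class and method names (`SnapshotThrottle`, `try_begin_snapshot`) are invented for the example.

```python
import threading

class SnapshotThrottle:
    """Sketch of a leader-side limit on concurrently sent snapshots.

    Observers beyond the limit are rejected (the real patch terminates
    their connection); followers are admitted unconditionally but still
    count against the quota, as the issue description specifies.
    """

    def __init__(self, max_concurrent):
        self.max_concurrent = max_concurrent
        self.in_flight = 0
        self.lock = threading.Lock()

    def try_begin_snapshot(self, is_follower):
        with self.lock:
            if is_follower or self.in_flight < self.max_concurrent:
                self.in_flight += 1
                return True
            return False  # observer over quota: refuse the snapshot

    def end_snapshot(self):
        with self.lock:
            self.in_flight -= 1

throttle = SnapshotThrottle(max_concurrent=2)
assert throttle.try_begin_snapshot(is_follower=False)      # 1st observer: ok
assert throttle.try_begin_snapshot(is_follower=False)      # 2nd observer: ok
assert not throttle.try_begin_snapshot(is_follower=False)  # 3rd observer: rejected
assert throttle.try_begin_snapshot(is_follower=True)       # follower always admitted
```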
| ZooKeeper | ZOOKEEPER-1927 | zkServer.sh fails to read dataDir (and others) from zoo.cfg on Solaris 10 (grep issue, manifests as FAILED TO WRITE PID). |
Bug | Closed | Major | Fixed | Chris Nauroth | Ed Schmed | Ed Schmed | 10/May/14 14:06 | 07/Sep/16 23:58 | 25/Aug/15 01:14 | 3.4.6 | 3.4.7, 3.5.3, 3.6.0 | scripts | 0 | 8 | ZOOKEEPER-2042, ZOOKEEPER-2078 | Solaris 5.10 | Fails to write the PID file with a permissions error, because the startup script fails to read the dataDir variable from zoo.cfg and then tries to use the drive root ( / ) as the data dir. Tracked the problem down to line 84 of zkServer.sh: ZOO_DATADIR="$(grep "^[[:space:]]*dataDir" "$ZOOCFG" | sed -e 's/.*=//')" If I run just that line and point it right at the config file, ZOO_DATADIR is empty. If I remove [[:space:]]* from the grep: ZOO_DATADIR="$(grep "^dataDir" "$ZOOCFG" | sed -e 's/.*=//')" Then it works fine. (If I also make the same change on lines 164 and 169.) My regex skills are pretty bad, so I'm afraid to comment on why [[:space:]]* needs to be in there? |
391785 | No Perforce job exists for this issue. | 3 | 391988 | 3 years, 28 weeks ago | 0|i1vh2v: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
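For context on ZOOKEEPER-1927 above: the failing line pipes `grep` into `sed` to extract the `dataDir` value from zoo.cfg, and the report suggests Solaris 10's default `/usr/bin/grep` does not handle the POSIX character class `[[:space:]]`, so the pattern matches nothing and `ZOO_DATADIR` ends up empty. The intended extraction, sketched here in Python (function name invented; the `sed` step takes everything after the `=`):

```python
import re

def read_config_value(cfg_text, key):
    """Mimic  grep "^[[:space:]]*dataDir" "$ZOOCFG" | sed -e 's/.*=//'  :
    find a (possibly indented) key=value line and return the value."""
    for line in cfg_text.splitlines():
        if re.match(r"^\s*" + re.escape(key) + r"\s*=", line):
            return line.split("=", 1)[1].strip()
    return None

cfg = "tickTime=2000\n  dataDir=/var/lib/zookeeper\nclientPort=2181\n"
assert read_config_value(cfg, "dataDir") == "/var/lib/zookeeper"
assert read_config_value(cfg, "missingKey") is None
```

At the shell level the usual workarounds are to call an XPG4/POSIX-conformant grep (e.g. `/usr/xpg4/bin/grep` on Solaris) or to avoid the character class entirely; the regex itself is fine, the grep implementation is the variable.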
| ZooKeeper | ZOOKEEPER-1926 | Unit tests should only use build/test/data for data |
Bug | Resolved | Major | Fixed | Enis Soztutar | Enis Soztutar | Enis Soztutar | 08/May/14 18:34 | 20/May/14 07:09 | 09/May/14 17:40 | 3.4.7, 3.5.0 | tests | 0 | 4 | Some of the unit tests are creating temp files under system tmp dir (/tmp), and put data there. We should encapsulate all temporary data from unit tests under build/test/data. ant clean will clean all data from previous runs. |
391521 | No Perforce job exists for this issue. | 2 | 391734 | 5 years, 44 weeks, 2 days ago | 0|i1vfin: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
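The cleanup ZOOKEEPER-1926 asks for amounts to rooting all test temp files under the build tree so a single `ant clean` removes them. A minimal sketch of the idea (helper name and prefix are invented for the example):

```python
import os
import tempfile

def make_test_dir(base="build/test/data"):
    """Create a per-test temp directory under the build tree instead of
    the system /tmp, so cleaning the build tree wipes all test data."""
    os.makedirs(base, exist_ok=True)
    return tempfile.mkdtemp(prefix="zktest-", dir=base)

d = make_test_dir()
assert d.startswith("build/test/data")
assert os.path.isdir(d)
```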
| ZooKeeper | ZOOKEEPER-1925 | Time to Live or auto expiration of zookeeper node |
New Feature | Resolved | Major | Duplicate | Unassigned | Omkar Vinit Joshi | Omkar Vinit Joshi | 08/May/14 15:53 | 19/Dec/19 18:01 | 02/Dec/16 19:48 | 2 | 9 | ZOOKEEPER-2169 | Today, whenever a znode is created, it stays there forever. We all know that there is a limitation in terms of how many nodes a system can handle. It would be nice to have a way to specify an expiry time for every znode, so that stale znodes are cleaned up automatically after a sufficiently long time. Any thoughts? | ttl_nodes | 391485 | No Perforce job exists for this issue. | 0 | 391699 | 4 years, 35 weeks, 3 days ago | 0|i1vfav: ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
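The TTL-node idea in ZOOKEEPER-1925 (later addressed by ZOOKEEPER-2169, linked above) is essentially a per-node expiry deadline plus a periodic sweep that deletes expired nodes. A toy sketch under those assumptions; the class and field names are invented:

```python
import heapq

class TtlStore:
    """Sketch of TTL znodes: nodes created with a ttl get an expiry
    deadline, and sweep() deletes every node whose deadline has passed."""

    def __init__(self):
        self.nodes = {}   # path -> data
        self.expiry = []  # min-heap of (deadline, path)

    def create(self, path, data, ttl=None, now=0):
        self.nodes[path] = data
        if ttl is not None:
            heapq.heappush(self.expiry, (now + ttl, path))

    def sweep(self, now):
        while self.expiry and self.expiry[0][0] <= now:
            _, path = heapq.heappop(self.expiry)
            self.nodes.pop(path, None)

store = TtlStore()
store.create("/stale", b"x", ttl=10, now=0)
store.create("/permanent", b"y")
store.sweep(now=11)
assert "/stale" not in store.nodes   # expired and cleaned up
assert "/permanent" in store.nodes   # no TTL, lives forever
```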
| ZooKeeper | ZOOKEEPER-1924 | \,/ 'n;kj kkln. l, |
Bug | Resolved | Major | Invalid | Unassigned | Shwetha GS | Shwetha GS | 07/May/14 08:28 | 07/May/14 08:28 | 07/May/14 08:28 | 0 | 1 | 391136 | No Perforce job exists for this issue. | 0 | 391357 | 5 years, 46 weeks, 1 day ago | 0|i1vd73: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1923 | A typo in zookeeperStarted document |
Bug | Resolved | Minor | Fixed | Chengwei Yang | Chengwei Yang | Chengwei Yang | 05/May/14 09:49 | 07/Aug/17 05:02 | 08/May/14 17:47 | 3.4.6 | 3.5.0 | documentation | 0 | 7 | The trunk branch | There is a typo in the document zookeeperStarted.*, see http://zookeeper.apache.org/doc/trunk/zookeeperStarted.html, in the section *Connecting to ZooKeeper*, where the *help* output shows *createpath*, which should be *create path*. | 390627 | No Perforce job exists for this issue. | 1 | 390873 | 2 years, 32 weeks, 3 days ago | 0|i1va87: ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1922 | Negative min latency value |
Bug | Open | Minor | Unresolved | Unassigned | J Potter | J Potter | 05/May/14 08:21 | 02/Jul/18 08:34 | 3.4.6 | server | 1 | 6 | We're seeing the output of stat on one node return a negative value for min latency time: stat Zookeeper version: 3.4.6-1569965, built on 02/20/2014 09:09 GMT Clients: ... Latency min/avg/max: -477/149/261002 (The max value seems suspicious, too.) Figured I'd report this, as I don't see any mention of it online or in other bug reports. Maybe negative values shouldn't be recorded? |
390614 | No Perforce job exists for this issue. | 0 | 390860 | 1 year, 37 weeks, 3 days ago | 0|i1va5b: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
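A negative minimum like the `-477` reported in ZOOKEEPER-1922 is the classic symptom of computing latency as a difference of wall-clock timestamps: if the clock steps backwards (e.g. an NTP adjustment) between request receipt and response, `end - start` goes negative. A sketch of the failure and one possible guard (clamping; a monotonic clock would be the stronger fix). The class name is invented for the example:

```python
class LatencyStats:
    """Track min/avg/max request latency in milliseconds."""

    def __init__(self):
        self.min = float("inf")
        self.max = 0
        self.total = 0
        self.count = 0

    def record(self, start_ms, end_ms):
        # With wall-clock timestamps, end_ms can be < start_ms after a
        # clock step; clamping keeps min from ever going negative.
        elapsed = max(0, end_ms - start_ms)
        self.min = min(self.min, elapsed)
        self.max = max(self.max, elapsed)
        self.total += elapsed
        self.count += 1

stats = LatencyStats()
stats.record(1000, 1005)
stats.record(2000, 1523)  # clock stepped back 477 ms mid-request
assert stats.min == 0     # clamped instead of -477
assert stats.max == 5
```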
| ZooKeeper | ZOOKEEPER-1921 | Zk binding to 127.0.0.1 for quorum connections |
Bug | Resolved | Major | Not A Problem | Unassigned | Nathan Neulinger | Nathan Neulinger | 04/May/14 11:28 | 23/Dec/14 15:57 | 23/Dec/14 15:57 | 3.4.6 | quorum | 0 | 3 | I did not see this behavior with 3.4.5. When I upgraded to 3.4.6, couldn't establish quorum any more - what I found was that the listener on :3888 was only running on 127.0.0.1. I was able to work around it by forcing dns to get the external hostnames, or removing 'my' hostname from /etc/hosts. Key point - this appears to be a significant change in behavior from 3.4.5 - which I did not have any problems with... I know you can specify the clientPortAddress - is there any way in the configuration to specify which address should be used for quorum connection listeners - or to force it to listen on 0.0.0.0 for quorum connections? |
390512 | No Perforce job exists for this issue. | 0 | 390765 | 5 years, 13 weeks, 2 days ago | 0|i1v9k7: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
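The symptom in ZOOKEEPER-1921 (the :3888 election listener bound only to 127.0.0.1) matches what happens when the listener binds to whatever the local hostname resolves to, and /etc/hosts maps that hostname to loopback. The workaround the reporter asks about, binding the quorum listener to all interfaces, can be sketched like this (function name invented; this is not the actual server code):

```python
import socket

def quorum_listener(bind_all=True, host=None, port=0):
    """Sketch: open an election listener socket.

    Binding to "" (INADDR_ANY / 0.0.0.0) makes the listener reachable
    on every interface, instead of only on whatever address the local
    hostname happens to resolve to (possibly 127.0.0.1 via /etc/hosts).
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    addr = "" if bind_all else (host or socket.gethostname())
    s.bind((addr, port))
    return s

s = quorum_listener(bind_all=True)
assert s.getsockname()[0] == "0.0.0.0"  # wildcard, not loopback-only
s.close()
```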
| ZooKeeper | ZOOKEEPER-1920 | Login thread is not shutdown when close the ClientCnxn |
Bug | Open | Minor | Unresolved | Rakesh Radhakrishnan | liuyang | liuyang | 27/Apr/14 22:59 | 30/Apr/15 07:26 | 3.4.6 | java client | 0 | 7 | ZOOKEEPER-938 | A new ZooKeeper client will start three threads: the SendThread, EventThread and LoginThread. I believe these threads should be shut down after calling zk.close. I verified that the SendThread and EventThread die, but the LoginThread is still alive. The stack is: "Thread-0" daemon prio=10 tid=0x00007ffcf0020000 nid=0x69c8 waiting on condition [0x00007ffd3cc25000] java.lang.Thread.State: TIMED_WAITING (sleeping) at java.lang.Thread.sleep(Native Method) at org.apache.zookeeper.Login$1.run(Login.java:183) at java.lang.Thread.run(Thread.java:744) |
389164 | No Perforce job exists for this issue. | 0 | 389410 | 4 years, 47 weeks ago | 0|i1v193: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
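The leak in ZOOKEEPER-1920 above is a background refresh loop that sleeps with no shutdown signal, so it survives `close()`. The general fix pattern, a stoppable worker that the close path signals and joins, can be sketched as follows (class and method names invented; this mirrors the shape of the bug, not the actual Login class):

```python
import threading

class LoginRefresher:
    """Sketch of a credential-refresh thread that actually stops on close().

    The reported leak happens when the refresh loop just sleeps inside
    `while True` with no shutdown flag; waiting on an Event instead lets
    close() wake the thread and end it immediately.
    """

    def __init__(self, interval_s=0.05):
        self._stop = threading.Event()
        self._thread = threading.Thread(
            target=self._run, args=(interval_s,), daemon=True)
        self._thread.start()

    def _run(self, interval_s):
        # Event.wait() returns True once set, ending the loop promptly.
        while not self._stop.wait(interval_s):
            pass  # refresh credentials here

    def close(self):
        self._stop.set()
        self._thread.join(timeout=1)

refresher = LoginRefresher()
assert refresher._thread.is_alive()
refresher.close()
assert not refresher._thread.is_alive()  # no leaked thread after close
```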
| ZooKeeper | ZOOKEEPER-1919 | Update the C implementation of removeWatches to have it match ZOOKEEPER-1910 |
Bug | Closed | Blocker | Fixed | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | 24/Apr/14 22:44 | 20/May/19 13:50 | 29/May/18 19:29 | 3.5.0 | 3.6.0, 3.5.5 | c client | 0 | 5 | 0 | 7200 | ZOOKEEPER-442, ZOOKEEPER-1914, ZOOKEEPER-3053, ZOOKEEPER-1910, ZOOKEEPER-2320, ZOOKEEPER-1887 | See https://issues.apache.org/jira/browse/ZOOKEEPER-1910 | 100% | 100% | 7200 | 0 | pull-request-available, remove_watches | 388802 | No Perforce job exists for this issue. | 2 | 389050 | 1 year, 42 weeks, 1 day ago | 0|i1uz1j: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1918 | Add 64 bit Windows as a supported development platform |
Task | Resolved | Minor | Fixed | Michi Mutsuzaki | Michi Mutsuzaki | Michi Mutsuzaki | 24/Apr/14 16:04 | 03/Jul/14 07:23 | 02/Jul/14 23:57 | 3.5.0 | documentation | 0 | 5 | Change this line: Win32 is supported as a development platform only for both server and client. to: Win32 and Win64 are supported as a development platform only for both server and client. http://zookeeper.apache.org/doc/trunk/zookeeperAdmin.html#sc_supportedPlatforms |
388727 | No Perforce job exists for this issue. | 1 | 388977 | 5 years, 38 weeks ago | Reviewed | 0|i1uylr: ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1917 | Apache Zookeeper logs cleartext admin passwords |
Bug | Resolved | Blocker | Fixed | Flavio Paiva Junqueira | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 22/Apr/14 14:02 | 13/Oct/14 00:22 | 12/Oct/14 23:16 | 3.4.7, 3.5.1, 3.6.0 | 0 | 6 | Check the CVE entry for a description: http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-0085 |
388154 | No Perforce job exists for this issue. | 3 | 388411 | 5 years, 23 weeks, 3 days ago | 0|i1uv4f: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1916 | make Cli depend on zookeeper |
Improvement | Patch Available | Minor | Unresolved | Michi Mutsuzaki | Michi Mutsuzaki | Michi Mutsuzaki | 19/Apr/14 19:56 | 14/Dec/19 06:07 | 3.7.0 | c client | 0 | 1 | windows visual c++ 2008 | Currently building the solution fails because the dependency is not properly set. | 387749 | No Perforce job exists for this issue. | 1 | 388010 | 5 years, 48 weeks, 4 days ago | 0|i1usnr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1915 | Use $(ProjectDir) macro to specify include directories |
Improvement | In Progress | Minor | Unresolved | Michi Mutsuzaki | Michi Mutsuzaki | Michi Mutsuzaki | 19/Apr/14 19:55 | 05/Feb/20 07:11 | 3.7.0, 3.5.8 | c client | 0 | 1 | windows visual c++ 2008 | Right now we need to explicitly set the ZOOKEEPER_HOME environment variable. | 387748 | No Perforce job exists for this issue. | 1 | 388009 | 5 years, 48 weeks, 5 days ago | 0|i1usnj: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1914 | TestWatchers.cc failure |
Bug | Open | Major | Unresolved | Michi Mutsuzaki | Michi Mutsuzaki | Michi Mutsuzaki | 18/Apr/14 18:51 | 05/Feb/20 07:16 | 3.7.0, 3.5.8 | c client | 0 | 4 | ZOOKEEPER-1919 | https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2051/console [exec] [exec] /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/src/c/tests/TestWatchers.cc:667: Assertion: assertion failed [Expression: ensureCondition( deliveryTracker.deliveryCounterEquals(2),1000)<1000] |
387686 | No Perforce job exists for this issue. | 2 | 387947 | 5 years, 35 weeks, 6 days ago | 0|i1us9r: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1913 | Invalid manifest files due to bogus revision property value |
Bug | Resolved | Major | Fixed | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | 18/Apr/14 16:42 | 19/Apr/14 07:06 | 18/Apr/14 18:35 | 3.4.6, 3.5.0 | 3.4.7, 3.5.0 | build | 0 | 6 | Without the proposed patch, I get invalid manifests because stderr is added to the revision property. I think this might be something specific to my setup though: {noformat} $ java -version Picked up JAVA_TOOL_OPTIONS: -Dfile.encoding=utf8 java version "1.7.0_51" OpenJDK Runtime Environment (fedora-2.4.5.1.fc20-x86_64 u51-b31) OpenJDK 64-Bit Server VM (build 24.51-b03, mixed mode) {noformat} since it doesn't seem to happen with older java/ant combinations. Nonetheless, it seems like the right thing is to explicitly ignore stderr. |
387663 | No Perforce job exists for this issue. | 1 | 387925 | 5 years, 48 weeks, 5 days ago | 0|i1us47: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1912 | Leader election lets 2 leaders happen |
Bug | Resolved | Critical | Not A Problem | Flavio Paiva Junqueira | Tanakorn Leesatapornwongsa | Tanakorn Leesatapornwongsa | 14/Apr/14 16:14 | 06/Feb/16 23:13 | 06/Feb/16 23:13 | 3.4.6 | leaderElection | 0 | 4 | Ubuntu 12.04, OpenJDK 1.6 | In 3-node cluster, when there are 2 nodes die and reboot during leader election, it might lead to the case that there are 2 leaders happen in the system. Eventually, a leader that does not has follower supports and quit being leader, but it makes us lose some availability. I am building a tools that can reorder messages and disk write, and also inject node crash to the system and found this bug. These are the step of events that my tools execute in sequence that lead to 2 leaders at the end. My zookeeper nodes have id = 0,1,2 packetsend from=0 to=1 state=0 leader=0 zxid=0 electionEpoch=1 peerEpoch=0 packetsend from=0 to=2 state=0 leader=0 zxid=0 electionEpoch=1 peerEpoch=0 packetsend from=2 to=0 state=0 leader=2 zxid=0 electionEpoch=1 peerEpoch=0 packetsend from=2 to=1 state=0 leader=2 zxid=0 electionEpoch=1 peerEpoch=0 packetsend from=1 to=0 state=0 leader=1 zxid=0 electionEpoch=1 peerEpoch=0 packetsend from=1 to=2 state=0 leader=1 zxid=0 electionEpoch=1 peerEpoch=0 packetsend from=1 to=0 state=0 leader=2 zxid=0 electionEpoch=1 peerEpoch=0 packetsend from=0 to=1 state=0 leader=2 zxid=0 electionEpoch=1 peerEpoch=0 packetsend from=1 to=2 state=0 leader=2 zxid=0 electionEpoch=1 peerEpoch=0 packetsend from=0 to=2 state=0 leader=2 zxid=0 electionEpoch=1 peerEpoch=0 diskwrite nodeId=0 write=currentEpoch nodecrash id=0 nodecrash id=1 nodestart id=0 nodestart id=1 diskwrite nodeId=2 write=currentEpoch packetsend from=2 to=0 state=0 leader=2 zxid=0 electionEpoch=1 peerEpoch=0 packetsend from=0 to=2 state=0 leader=0 zxid=0 electionEpoch=1 peerEpoch=1 packetsend from=2 to=1 state=0 leader=2 zxid=0 electionEpoch=1 peerEpoch=0 packetsend from=0 to=1 state=0 leader=0 zxid=0 electionEpoch=1 peerEpoch=1 packetsend from=1 to=0 state=0 leader=1 
zxid=0 electionEpoch=1 peerEpoch=0 packetsend from=1 to=2 state=0 leader=1 zxid=0 electionEpoch=1 peerEpoch=0 packetsend from=2 to=0 state=2 leader=2 zxid=0 electionEpoch=1 peerEpoch=1 packetsend from=1 to=0 state=0 leader=0 zxid=0 electionEpoch=1 peerEpoch=1 packetsend from=2 to=1 state=2 leader=2 zxid=0 electionEpoch=1 peerEpoch=1 packetsend from=1 to=2 state=0 leader=0 zxid=0 electionEpoch=1 peerEpoch=1 packetsend from=2 to=1 state=2 leader=2 zxid=0 electionEpoch=1 peerEpoch=1 packetsend from=1 to=0 state=0 leader=0 zxid=0 electionEpoch=1 peerEpoch=1 packetsend from=1 to=2 state=0 leader=0 zxid=0 electionEpoch=1 peerEpoch=1 packetsend from=0 to=1 state=2 leader=0 zxid=0 electionEpoch=1 peerEpoch=1 packetsend from=2 to=1 state=2 leader=2 zxid=0 electionEpoch=1 peerEpoch=1 packetsend from=1 to=0 state=0 leader=0 zxid=0 electionEpoch=1 peerEpoch=1 packetsend from=1 to=2 state=0 leader=0 zxid=0 electionEpoch=1 peerEpoch=1 packetsend from=0 to=1 state=2 leader=0 zxid=0 electionEpoch=1 peerEpoch=1 packetsend from=2 to=1 state=2 leader=2 zxid=0 electionEpoch=1 peerEpoch=1 packetsend from=1 to=0 state=0 leader=0 zxid=0 electionEpoch=1 peerEpoch=1 packetsend from=1 to=2 state=0 leader=0 zxid=0 electionEpoch=1 peerEpoch=1 packetsend from=0 to=1 state=2 leader=0 zxid=0 electionEpoch=1 peerEpoch=1 packetsend from=2 to=0 state=0 leader=2 zxid=0 electionEpoch=2 peerEpoch=1 packetsend from=2 to=1 state=0 leader=2 zxid=0 electionEpoch=2 peerEpoch=1 packetsend from=0 to=2 state=2 leader=0 zxid=0 electionEpoch=1 peerEpoch=1 packetsend from=2 to=0 state=0 leader=2 zxid=0 electionEpoch=2 peerEpoch=1 packetsend from=1 to=0 state=0 leader=2 zxid=0 electionEpoch=2 peerEpoch=1 packetsend from=1 to=2 state=0 leader=2 zxid=0 electionEpoch=2 peerEpoch=1 packetsend from=2 to=1 state=0 leader=2 zxid=0 electionEpoch=2 peerEpoch=1 packetsend from=0 to=2 state=2 leader=0 zxid=0 electionEpoch=1 peerEpoch=1 packetsend from=2 to=0 state=0 leader=2 zxid=0 electionEpoch=2 peerEpoch=1 packetsend 
from=0 to=1 state=2 leader=0 zxid=0 electionEpoch=1 peerEpoch=1 packetsend from=2 to=1 state=0 leader=2 zxid=0 electionEpoch=2 peerEpoch=1 packetsend from=0 to=2 state=2 leader=0 zxid=0 electionEpoch=1 peerEpoch=1 diskwrite nodeId=2 write=currentEpoch diskwrite nodeId=1 write=currentEpoch |
386687 | No Perforce job exists for this issue. | 2 | 386951 | 4 years, 6 weeks, 5 days ago | 0|i1um47: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1911 | REST contrib module does not include all required files when packaged |
Bug | Resolved | Major | Fixed | Sean Mackrory | Sean Mackrory | Sean Mackrory | 11/Apr/14 11:08 | 25/Apr/14 07:49 | 25/Apr/14 04:12 | 3.4.6, 3.5.0 | 3.4.7, 3.5.0 | 0 | 5 | If you compile the REST contrib module, the tarball will only include the main JAR. It will not bundle the required dependencies or include the minimal working configuration files. | 386287 | No Perforce job exists for this issue. | 2 | 386552 | 5 years, 47 weeks, 6 days ago | 0|i1ujnj: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1910 | RemoveWatches wrongly removes the watcher if multiple watches exist on a path |
Bug | Resolved | Major | Fixed | Rakesh Radhakrishnan | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 11/Apr/14 04:08 | 16/Dec/18 09:46 | 28/Apr/14 22:31 | 3.5.0 | java client, server | 0 | 7 | ZOOKEEPER-442, ZOOKEEPER-2586, ZOOKEEPER-1919 | Consider a case where a zkclient has added 2 data watchers (say 'w1' and 'w2') on '/node1'. Now the user removes w1, but this deletes the 'CnxnWatcher' in the ZK server against the "/node1" path. This affects any other data watchers of the same client on the same path; in our case 'w2' would not be notified. Note: please see the attached test case for details. |
remove_watches | 386218 | No Perforce job exists for this issue. | 4 | 386483 | 1 year, 43 weeks, 1 day ago | 0|i1uj87: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
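The failure mode ZOOKEEPER-1910 describes — removing one watcher drops the server-side watch entry for every watcher on that path — can be modelled in a few lines. This is an illustrative Python sketch, not ZooKeeper's actual Java code; `WatchRegistry` and its method names are hypothetical:

```python
# Hypothetical model of a per-path watcher table: removing one watcher must
# not forget the path while other watchers for it remain registered.
class WatchRegistry:
    def __init__(self):
        self.watches = {}  # path -> set of watcher ids

    def add(self, path, watcher):
        self.watches.setdefault(path, set()).add(watcher)

    def remove(self, path, watcher):
        watchers = self.watches.get(path, set())
        watchers.discard(watcher)
        if not watchers:
            # Only drop the path entry once the *last* watcher is gone.
            self.watches.pop(path, None)

    def is_watched(self, path):
        return path in self.watches
```

With this model, removing 'w1' from '/node1' leaves 'w2' still registered, which is the behaviour the bug report expects.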
| ZooKeeper | ZOOKEEPER-1909 | removeWatches doesn't return NOWATCHER when there is no watch set |
Bug | Resolved | Major | Fixed | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | 09/Apr/14 19:51 | 16/Dec/18 09:47 | 17/Apr/14 02:52 | 3.5.0 | 3.5.0 | server | 0 | 5 | ZOOKEEPER-442 introduced support for a new opcode: removeWatches. The way it was implemented, though, implies that you need to check on the client side if a watch/watcher is set *before* you send your request to the server. If you don't, ZK will just swallow your request and won't return an error code if there isn't a watch set for that path. I noticed this whilst implementing removeWatches for Kazoo [1]. As mentioned, I guess it could be expected that clients should do the check on their side, but I think the correct thing would be to have the server do the validation and return the error code accordingly as well. [~rakeshr], [~phunt]: thoughts? [1] https://github.com/rgs1/kazoo/commit/44ca48e975aeea3fd0664fe13136a72caf89e54f |
remove_watches | 385925 | No Perforce job exists for this issue. | 4 | 386189 | 5 years, 49 weeks ago | 0|i1uhfb: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
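The server-side validation ZOOKEEPER-1909 asks for amounts to consulting the watch table before removing anything. A minimal Python model follows; the `remove_watches` helper is hypothetical, and the `NOWATCHER` value mirrors the error code ZooKeeper defines but is stated here as an assumption:

```python
# Assumed to match KeeperException.Code.NOWATCHER in ZooKeeper 3.5.
NOWATCHER = -121

def remove_watches(watches, path):
    """Return 0 on success, NOWATCHER if no watch exists for the path.

    Sketch of server-side validation: the server, not the client, rejects
    a removal request for a path that has no watch set.
    """
    if path not in watches:
        return NOWATCHER
    del watches[path]
    return 0
```

The point of the sketch is the early return: the request is never silently swallowed, so clients such as Kazoo need no pre-check of their own.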
| ZooKeeper | ZOOKEEPER-1908 | setAcl should have a recursive function |
Improvement | Closed | Major | Fixed | Reid Chan | Kevin Odell | Kevin Odell | 01/Apr/14 18:09 | 04/Oct/19 10:55 | 12/Oct/18 11:08 | 3.4.6, 4.0.0 | 3.6.0, 3.5.5 | scripts, server | 2 | 8 | 0 | 22800 | setAcl should be have a recursive function. This becomes a problem with HBase when trying to back in and out of secure clusters. | 100% | 100% | 22800 | 0 | pull-request-available | 384374 | No Perforce job exists for this issue. | 0 | 384642 | 1 year, 22 weeks, 6 days ago | 0|i1u7wn: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1907 | Improve Thread handling |
Improvement | Resolved | Major | Fixed | Rakesh Radhakrishnan | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 31/Mar/14 11:18 | 06/Dec/16 22:03 | 17/Dec/15 04:13 | 3.5.0 | 3.4.7, 3.5.1, 3.6.0 | server | 0 | 8 | ZOOKEEPER-602, ZOOKEEPER-2029, ZOOKEEPER-2247, ZOOKEEPER-207 | The server has many critical threads running and coordinating with each other, such as the RequestProcessor chains. Going through them, most have a similar structure:
{code}
public void run() {
    try {
        while (running) {
            // processing logic
        }
    } catch (InterruptedException e) {
        LOG.error("Unexpected interruption", e);
    } catch (Exception e) {
        LOG.error("Unexpected exception", e);
    }
    LOG.info("...exited loop!");
}
{code}
From the design I can see there is a chance of silently leaving the thread by swallowing the exception. If this happens in production, the server would hang forever and would not be able to fulfill its role, and it would be hard for a management tool to detect this. The idea of this JIRA is to discuss and improve this. Reference: [Community discussion thread|http://mail-archives.apache.org/mod_mbox/zookeeper-user/201403.mbox/%3CC2496325850AA74C92AAF83AA9662D26458A1D67@szxeml561-mbx.china.huawei.com%3E] |
384013 | No Perforce job exists for this issue. | 14 | 384281 | 3 years, 15 weeks, 1 day ago | 0|i1u5on: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
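The improvement ZOOKEEPER-1907 discusses — surfacing an unexpected thread death instead of logging and exiting silently — can be sketched language-independently. This is a minimal Python model (the `supervised` helper is hypothetical, not ZooKeeper code): the worker's loop is wrapped so any escaping exception reaches a supervisor callback before the thread dies.

```python
import threading

def supervised(target, on_death):
    """Run target() on a thread; report any unexpected exception to on_death.

    The wrapper re-reports rather than swallows, so a supervisor (or a
    management tool) can observe that the critical thread has exited.
    """
    def wrapper():
        try:
            target()
        except BaseException as exc:
            on_death(exc)
    t = threading.Thread(target=wrapper)
    t.start()
    return t
```

A supervisor could react to the callback by restarting the worker or shutting the server down cleanly, rather than leaving it half-alive.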
| ZooKeeper | ZOOKEEPER-1906 | zkpython: invalid data in GetData for empty node |
Bug | Resolved | Major | Fixed | Nikita Vetoshkin | Nikita Vetoshkin | Nikita Vetoshkin | 31/Mar/14 03:52 | 02/Apr/14 07:10 | 01/Apr/14 17:02 | 3.4.6, 3.5.0 | 3.4.7, 3.5.0 | contrib-bindings | 0 | 4 | FreeBSD | In python if we ask {{zookeeper.get}} (which translates into {{pyzoo_get}}) for empty node we can get trash in result on Python level. Issue is pretty tricky. It goes like this: * python C extension allocates buffer with malloc {{buffer = malloc(sizeof(char)*buffer_len);}} and calls {{zoo_wget}} providing both {{buffer}} and {{buffer_len}}. * deserialize_GetDataResponse deserializes empty buffer and sets {{buffer_len}} to -1 and {{zoo_wget}} returns. * python C extension calls {{Py_BuildValue( "(s#,N)", buffer,buffer_len ...}} with {{buffer_len}} set to -1. * {{Py_BuildValue}} calls {{do_mkvalue}} to build python string which falls back to {{strlen(str)}} in case string length ({{buffer_len < 0}}) - that's our case. * *usually* strlen returns 0, because e.g. linux uses magic zero filled page as result of mmap (which is being copied upon page fault, i.e. when you want to write to it) * everything works! But on FreeBSD (not always) we can get random data in {{malloc}} result and this trash will be exposed to the user. Not sure about the right way to fix this, but something like {noformat} Index: src/contrib/zkpython/src/c/zookeeper.c =================================================================== --- src/contrib/zkpython/src/c/zookeeper.c (revision 1583238) +++ src/contrib/zkpython/src/c/zookeeper.c (working copy) @@ -1223,7 +1223,7 @@ } PyObject *stat_dict = build_stat( &stat ); - PyObject *ret = Py_BuildValue( "(s#,N)", buffer,buffer_len, stat_dict ); + PyObject *ret = Py_BuildValue( "(s#,N)", buffer,buffer_len < 0 ? 0 : buffer_len, stat_dict ); free(buffer); return ret; {noformat} should do the trick |
383922 | No Perforce job exists for this issue. | 1 | 384190 | 5 years, 51 weeks, 1 day ago | 0|i1u54f: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
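The zkpython patch above clamps the negative deserialized length before `Py_BuildValue` reads the buffer. The same guard, as a standalone Python sketch (the `safe_slice` helper is hypothetical, not part of zkpython):

```python
def safe_slice(buf, length):
    """Treat a negative deserialized length as an empty payload.

    Mirrors the C fix: without the clamp, a length of -1 makes the caller
    fall back to reading until a NUL byte, exposing uninitialized memory.
    """
    if length < 0:
        length = 0
    return buf[:length]
```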
| ZooKeeper | ZOOKEEPER-1905 | ZOOKEEPER-1833 ZKClients are hitting KeeperException$ConnectionLossException due to wrong usage pattern |
Sub-task | Resolved | Major | Won't Fix | Rakesh Radhakrishnan | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 25/Mar/14 09:33 | 27/Mar/14 09:53 | 27/Mar/14 09:53 | 3.4.7 | tests | 0 | 2 | ZOOKEEPER-22 | Since ZooKeeper client connection establishment happens asynchronously, the client should wait for the 'KeeperState.SyncConnected' event before performing any ops. Many tests have this kind of wrong pattern. Reference: the stack trace below was taken from build https://builds.apache.org/job/ZooKeeper-3.4-WinVS2008_java/465/
{code}
[junit] 2014-03-19 08:36:53,056 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@62] - TEST METHOD FAILED testChecksums
[junit] org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /crctest-942
[junit] at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
[junit] at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
[junit] at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783)
[junit] at org.apache.zookeeper.server.CRCTest.testChecksums(CRCTest.java:127)
{code} |
381797 | No Perforce job exists for this issue. | 0 | 382072 | 6 years ago | 0|i1ts2f: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1904 | ZOOKEEPER-1833 WatcherTest#testWatchAutoResetWithPending is failing |
Sub-task | Resolved | Major | Fixed | Rakesh Radhakrishnan | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 25/Mar/14 04:05 | 27/Mar/14 07:10 | 26/Mar/14 17:17 | 3.4.7, 3.5.0 | tests | 0 | 4 | Following is the stacktrace taken from [Build : ZooKeeper-3.4-WinVS2008_java/465|https://builds.apache.org/job/ZooKeeper-3.4-WinVS2008_java/465/] {code} [junit] 2014-03-19 09:28:50,020 [myid:] - INFO [main-SendThread(127.0.0.1:11278):ClientCnxn$SendThread@975] - Opening socket connection to server 127.0.0.1/127.0.0.1:11278. Will not attempt to authenticate using SASL (unknown error) [junit] 2014-03-19 09:28:51,025 [myid:] - WARN [main-SendThread(127.0.0.1:11278):ClientCnxn$SendThread@1102] - Session 0x144d9ab1f9e0000 for server null, unexpected error, closing socket connection and attempting reconnect [junit] java.net.ConnectException: Connection refused: no further information [junit] at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) [junit] at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:701) [junit] at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361) [junit] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081) [junit] 2014-03-19 09:28:52,661 [myid:] - INFO [main-SendThread(127.0.0.1:11278):ClientCnxn$SendThread@975] - Opening socket connection to server 127.0.0.1/127.0.0.1:11278. 
Will not attempt to authenticate using SASL (unknown error) [junit] 2014-03-19 09:28:53,640 [myid:] - WARN [main-SendThread(127.0.0.1:11278):ClientCnxn$SendThread@1102] - Session 0x144d9ab1f9e0000 for server null, unexpected error, closing socket connection and attempting reconnect [junit] java.net.ConnectException: Connection refused: no further information [junit] at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) [junit] at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:701) [junit] at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361) [junit] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081) [junit] 2014-03-19 09:28:55,435 [myid:] - INFO [main-SendThread(127.0.0.1:11278):ClientCnxn$SendThread@975] - Opening socket connection to server 127.0.0.1/127.0.0.1:11278. Will not attempt to authenticate using SASL (unknown error) [junit] 2014-03-19 09:28:56,111 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@62] - TEST METHOD FAILED testWatchAutoResetWithPending [junit] java.util.concurrent.TimeoutException: Did not disconnect [junit] at org.apache.zookeeper.test.ClientBase$CountdownWatcher.waitForDisconnected(ClientBase.java:145) [junit] at org.apache.zookeeper.test.WatcherTest.testWatchAutoResetWithPending(WatcherTest.java:202) [junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) {code} |
381756 | No Perforce job exists for this issue. | 3 | 382031 | 6 years ago | 0|i1trtb: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1903 | 30/6 |
Bug | Resolved | Major | Invalid | Unassigned | ROY Assink | ROY Assink | 24/Mar/14 10:52 | 15/Apr/14 13:59 | 15/Apr/14 13:59 | 0 | 2 | 381546 | No Perforce job exists for this issue. | 0 | 381821 | 5 years, 49 weeks, 2 days ago | 0|i1tqiv: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1902 | summertime |
Bug | Resolved | Major | Invalid | Unassigned | ROY Assink | ROY Assink | 24/Mar/14 10:51 | 24/Mar/14 14:11 | 24/Mar/14 14:11 | 0 | 3 | 381545 | No Perforce job exists for this issue. | 0 | 381820 | 6 years, 3 days ago | 0|i1tqin: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1901 | [JDK8] Sort children for comparison in AsyncOps tests |
Bug | Resolved | Minor | Fixed | Andrew Kyle Purtell | Andrew Kyle Purtell | Andrew Kyle Purtell | 23/Mar/14 16:08 | 25/Mar/14 12:30 | 23/Mar/14 17:04 | 3.4.6 | 3.4.7, 3.5.0 | tests | 0 | 4 | AsyncOpsTest, ChrootAsyncTest, and NioNettySuiteTest can fail running on Java 8 if child znodes are not added to a list in the same order as expected. For example {noformat} Testcase: testAsyncGetChildrenTwo took 0.166 sec FAILED expected:<OK:/foo:[child[1, child2]]> but was:<OK:/foo:[child[2, child1]]> junit.framework.AssertionFailedError: expected:<OK:/foo:[child[1, child2]]> but was:<OK:/foo:[child[2, child1]]> at org.apache.zookeeper.test.AsyncOps$AsyncCB.verify(AsyncOps.java:113) at org.apache.zookeeper.test.AsyncOps$ChildrenCB.verify(AsyncOps.java:298) at org.apache.zookeeper.test.AsyncOps$ChildrenCB.verifyGetChildrenTwo(AsyncOps.java:287) at org.apache.zookeeper.test.AsyncOpsTest.testAsyncGetChildrenTwo(AsyncOpsTest.java:155) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:52) {noformat} {noformat} Testcase: testAsyncGetChildren2Two took 0.154 sec FAILED expected:<OK:/foo:[child[1, child2]]> but was:<OK:/foo:[child[2, child1]]> junit.framework.AssertionFailedError: expected:<OK:/foo:[child[1, child2]]> but was:<OK:/foo:[child[2, child1]]> at org.apache.zookeeper.test.AsyncOps$AsyncCB.verify(AsyncOps.java:113) at org.apache.zookeeper.test.AsyncOps$Children2CB.verify(AsyncOps.java:383) at org.apache.zookeeper.test.AsyncOps$Children2CB.verifyGetChildrenTwo(AsyncOps.java:372) at org.apache.zookeeper.test.AsyncOpsTest.testAsyncGetChildren2Two(AsyncOpsTest.java:175) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:52) {noformat} This seems like a test only issue because getChildren javadoc says "The list of children returned is not sorted and no guarantee is provided as to its natural or lexical order." So, fix the tests by sorting the incoming lists. |
381434 | No Perforce job exists for this issue. | 2 | 381711 | 6 years, 3 days ago | 0|i1tpuf: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
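ZOOKEEPER-1901's fix is an order-insensitive comparison: since getChildren makes no ordering guarantee, the tests should sort both sides before asserting equality. A minimal sketch (the `children_match` helper is hypothetical; the real fix lives in the Java tests):

```python
def children_match(actual, expected):
    """Order-insensitive comparison of child-znode name lists.

    getChildren's javadoc promises no natural or lexical order, so tests
    must normalize both lists before comparing them.
    """
    return sorted(actual) == sorted(expected)
```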
| ZooKeeper | ZOOKEEPER-1900 | NullPointerException in truncate |
Bug | Resolved | Blocker | Fixed | Camille Fournier | Steven Bower | Steven Bower | 21/Mar/14 14:11 | 01/Jul/14 07:11 | 30/Jun/14 13:20 | 3.4.5, 3.4.6 | 3.4.7, 3.5.0 | 0 | 8 | linux java 1.6 | The other day we started up a ZK instance that had been down for a bit (1day) and started getting NPEs all over the place... {noformat} 2014-20-03 11:15:42.320 INFO QuorumPeerConfig [main] - Reading configuration from: /xxx/bin/zk/etc/zk.cfg 2014-20-03 11:15:42.350 INFO QuorumPeerConfig [main] - Defaulting to majority quorums 2014-20-03 11:15:42.353 INFO DatadirCleanupManager [main] - autopurge.snapRetainCount set to 3 2014-20-03 11:15:42.353 INFO DatadirCleanupManager [main] - autopurge.purgeInterval set to 0 2014-20-03 11:15:42.353 INFO DatadirCleanupManager [main] - Purge task is not scheduled. 2014-20-03 11:15:42.385 INFO QuorumPeerMain [main] - Starting quorum peer 2014-20-03 11:15:42.399 INFO NIOServerCnxnFactory [main] - binding to port 0.0.0.0/0.0.0.0:5555 2014-20-03 11:15:42.413 INFO QuorumPeer [main] - tickTime set to 2000 2014-20-03 11:15:42.413 INFO QuorumPeer [main] - minSessionTimeout set to -1 2014-20-03 11:15:42.413 INFO QuorumPeer [main] - maxSessionTimeout set to -1 2014-20-03 11:15:42.413 INFO QuorumPeer [main] - initLimit set to 10 2014-20-03 11:15:42.456 INFO FileSnap [main] - Reading snapshot /xxx/zk_data/version-2/snapshot.2c00000000 2014-20-03 11:15:42.463 INFO QuorumCnxManager [Thread-3] - My election bind port: 0.0.0.0/0.0.0.0:7555 2014-20-03 11:15:42.470 INFO QuorumPeer [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:5555] - LOOKING 2014-20-03 11:15:42.471 INFO FastLeaderElection [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:5555] - New election. 
My id = 3, proposed zxid=0x8000000000000000 2014-20-03 11:15:42.479 INFO FastLeaderElection [WorkerReceiver[myid=3]] - Notification: 2 (n.leader), 0x2b00000002 (n.zxid), 0x2c (n.round), FOLLOWING (n.state), 1 (n.sid), 0x2b (n.peerEPoch), LOOKING (my state) 2014-20-03 11:15:42.479 INFO FastLeaderElection [WorkerReceiver[myid=3]] - Notification: 2 (n.leader), 0x2b00000002 (n.zxid), 0x2c (n.round), FOLLOWING (n.state), 1 (n.sid), 0x2b (n.peerEPoch), LOOKING (my state) 2014-20-03 11:15:42.482 INFO QuorumCnxManager [WorkerSender[myid=3]] - Have smaller server identifier, so dropping the connection: (5, 3) 2014-20-03 11:15:42.482 INFO FastLeaderElection [WorkerReceiver[myid=3]] - Notification: 2 (n.leader), 0x2b00000002 (n.zxid), 0x2c (n.round), LEADING (n.state), 2 (n.sid), 0x2b (n.peerEPoch), LOOKING (my state) 2014-20-03 11:15:42.482 INFO FastLeaderElection [WorkerReceiver[myid=3]] - Notification: 2 (n.leader), 0x2b00000002 (n.zxid), 0x2c (n.round), LEADING (n.state), 2 (n.sid), 0x2b (n.peerEPoch), LOOKING (my state) 2014-20-03 11:15:42.482 INFO QuorumPeer [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:5555] - OBSERVING 2014-20-03 11:15:42.486 INFO Learner [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:5555] - TCP NoDelay set to: true 2014-20-03 11:15:42.488 INFO QuorumCnxManager [host1/###.###.###.###:7555] - Received connection request /###.###.###.###:64528 2014-20-03 11:15:42.490 INFO ZooKeeperServer [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:5555] - Server environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT 2014-20-03 11:15:42.490 INFO ZooKeeperServer [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:5555] - Server environment:host.name=host1 2014-20-03 11:15:42.490 INFO ZooKeeperServer [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:5555] - Server environment:java.version=1.6.0_20 2014-20-03 11:15:42.490 INFO ZooKeeperServer [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:5555] - Server environment:java.vendor=Sun Microsystems Inc. 
2014-20-03 11:15:42.490 INFO ZooKeeperServer [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:5555] - Server environment:java.home=/xxx/util/common/jdk1.6.0_20_64bit/jre 2014-20-03 11:15:42.490 INFO ZooKeeperServer [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:5555] - Server environment:java.class.path=/xxx/bin/zk/etc:/xxx/bin/zk/lib/slf4j-log4j12-1.7.2.jar:/xxx/bin/zk/lib/jline-0.9.94.jar:/xxx/bin/zk/lib/jul-to-slf4j-1.7.2.jar:/xxx/bin/zk/lib/ZooInspector-3.4.5.jar:/xxx/bin/zk/lib/jcl-over-slf4j-1.7.2.jar:/xxx/bin/zk/lib/log4j-1.2.17.jar:/xxx/bin/zk/lib/zookeeper-3.4.5.jar:/xxx/bin/zk/lib/slf4j-api-1.7.2.jar:/xxx/bin/zk/lib/netty-3.2.2.Final.jar 2014-20-03 11:15:42.490 INFO ZooKeeperServer [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:5555] - Server environment:java.library.path=/xxx/util/common/jdk1.6.0_20_64bit/jre/lib/amd64/server:/xxx/util/common/jdk1.6.0_20_64bit/jre/lib/amd64:/xxx/util/common/jdk1.6.0_20_64bit/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib 2014-20-03 11:15:42.490 INFO ZooKeeperServer [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:5555] - Server environment:java.io.tmpdir=/tmp 2014-20-03 11:15:42.490 INFO ZooKeeperServer [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:5555] - Server environment:java.compiler=<NA> 2014-20-03 11:15:42.490 INFO ZooKeeperServer [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:5555] - Server environment:os.name=Linux 2014-20-03 11:15:42.490 INFO ZooKeeperServer [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:5555] - Server environment:os.arch=amd64 2014-20-03 11:15:42.490 INFO ZooKeeperServer [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:5555] - Server environment:os.version=2.6.32-220.2.1.el6.x86_64 2014-20-03 11:15:42.490 INFO ZooKeeperServer [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:5555] - Server environment:user.name=op 2014-20-03 11:15:42.490 INFO ZooKeeperServer [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:5555] - Server environment:user.home=/xxx/bin 2014-20-03 11:15:42.490 INFO ZooKeeperServer [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:5555] - Server environment:user.dir=/xxx/bin 
2014-20-03 11:15:42.491 INFO ZooKeeperServer [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:5555] - Created server with tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir /xxx/zk_log/version-2 snapdir /xxx/zk_data/version-2 2014-20-03 11:15:42.493 INFO Learner [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:5555] - Observing host4/###.###.###.###:6555 2014-20-03 11:15:42.495 INFO FastLeaderElection [WorkerReceiver[myid=3]] - Notification: 2 (n.leader), 0x2b00000002 (n.zxid), 0x2c (n.round), FOLLOWING (n.state), 5 (n.sid), 0x2b (n.peerEPoch), OBSERVING (my state) 2014-20-03 11:15:42.498 WARN Learner [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:5555] - Truncating log to get in sync with the leader 0x2b00000002 2014-20-03 11:15:42.499 WARN QuorumPeer [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:5555] - Unexpected exception java.lang.NullPointerException at org.apache.zookeeper.server.persistence.FileTxnLog.truncate(FileTxnLog.java:352) at org.apache.zookeeper.server.persistence.FileTxnSnapLog.truncateLog(FileTxnSnapLog.java:259) at org.apache.zookeeper.server.ZKDatabase.truncateLog(ZKDatabase.java:438) at org.apache.zookeeper.server.quorum.Learner.syncWithLeader(Learner.java:339) at org.apache.zookeeper.server.quorum.Observer.observeLeader(Observer.java:72) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:727) 2014-20-03 11:15:42.500 INFO Learner [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:5555] - shutdown called java.lang.Exception: shutdown Observer at org.apache.zookeeper.server.quorum.Observer.shutdown(Observer.java:137) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:731) 2014-20-03 11:15:42.500 INFO ZooKeeperServer [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:5555] - shutting down 2014-20-03 11:15:42.500 INFO QuorumPeer [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:5555] - LOOKING 2014-20-03 11:15:42.501 INFO FastLeaderElection [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:5555] - New election. 
My id = 3, proposed zxid=0x8000000000000000 2014-20-03 11:15:42.503 INFO FastLeaderElection [WorkerReceiver[myid=3]] - Notification: 2 (n.leader), 0x2b00000002 (n.zxid), 0x2c (n.round), FOLLOWING (n.state), 1 (n.sid), 0x2b (n.peerEPoch), LOOKING (my state) 2014-20-03 11:15:42.503 INFO FastLeaderElection [WorkerReceiver[myid=3]] - Notification: 2 (n.leader), 0x2b00000002 (n.zxid), 0x2c (n.round), LEADING (n.state), 2 (n.sid), 0x2b (n.peerEPoch), LOOKING (my state) 2014-20-03 11:15:42.503 INFO QuorumPeer [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:5555] - OBSERVING 2014-20-03 11:15:42.503 INFO ZooKeeperServer [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:5555] - Created server with tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir /xxx/zk_log/version-2 snapdir /xxx/zk_data/version-2 2014-20-03 11:15:42.504 INFO Learner [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:5555] - Observing host4/###.###.###.###:6555 2014-20-03 11:15:42.504 INFO FastLeaderElection [WorkerReceiver[myid=3]] - Notification: 2 (n.leader), 0x2b00000002 (n.zxid), 0x2c (n.round), FOLLOWING (n.state), 5 (n.sid), 0x2b (n.peerEPoch), OBSERVING (my state) 2014-20-03 11:15:42.514 INFO FileSnap [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:5555] - Reading snapshot /xxx/zk_data/version-2/snapshot.2c00000000 2014-20-03 11:15:42.517 WARN Learner [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:5555] - Truncating log to get in sync with the leader 0x2b00000002 2014-20-03 11:15:42.518 WARN QuorumPeer [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:5555] - Unexpected exception java.lang.NullPointerException at org.apache.zookeeper.server.persistence.FileTxnLog.truncate(FileTxnLog.java:352) at org.apache.zookeeper.server.persistence.FileTxnSnapLog.truncateLog(FileTxnSnapLog.java:259) at org.apache.zookeeper.server.ZKDatabase.truncateLog(ZKDatabase.java:438) at org.apache.zookeeper.server.quorum.Learner.syncWithLeader(Learner.java:339) at org.apache.zookeeper.server.quorum.Observer.observeLeader(Observer.java:72) at 
org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:727) {noformat} This exception went on and on over and over again (more than 1M times in a day) until it then began spewing this exception: {noformat} 2014-20-03 13:45:32.843 INFO QuorumPeer [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:5555] - LOOKING 2014-20-03 13:45:32.843 INFO FastLeaderElection [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:5555] - New election. My id = 3, proposed zxid=0x8000000000000000 2014-20-03 13:45:32.844 INFO FastLeaderElection [WorkerReceiver[myid=3]] - Notification: 2 (n.leader), 0x2b00000002 (n.zxid), 0x2c (n.round), FOLLOWING (n.state), 1 (n.sid), 0x2b (n.peerEPoch), LOOKING (my state) 2014-20-03 13:45:32.845 INFO FastLeaderElection [WorkerReceiver[myid=3]] - Notification: 2 (n.leader), 0x2b00000002 (n.zxid), 0x2c (n.round), LEADING (n.state), 2 (n.sid), 0x2b (n.peerEPoch), LOOKING (my state) 2014-20-03 13:45:32.845 INFO QuorumPeer [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:5555] - OBSERVING 2014-20-03 13:45:32.845 INFO FastLeaderElection [WorkerReceiver[myid=3]] - Notification: 2 (n.leader), 0x2b00000002 (n.zxid), 0x2c (n.round), FOLLOWING (n.state), 5 (n.sid), 0x2b (n.peerEPoch), OBSERVING (my state) 2014-20-03 13:45:32.845 INFO ZooKeeperServer [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:5555] - Created server with tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir /xxx/zk_log/version-2 snapdir /xxx/zk_data/version-2 2014-20-03 13:45:32.845 INFO Learner [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:5555] - Observing host4/###.###.###.###:6555 2014-20-03 13:45:32.853 WARN Learner [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:5555] - Unexpected exception, tries=0, connecting to host4/###.###.###.###:6555 java.net.ConnectException: Cannot assign requested address at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333) at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195) at 
java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366) at java.net.Socket.connect(Socket.java:529) at org.apache.zookeeper.server.quorum.Learner.connectToLeader(Learner.java:224) at org.apache.zookeeper.server.quorum.Observer.observeLeader(Observer.java:69) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:727) 2014-20-03 13:45:33.863 INFO FileSnap [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:5555] - Reading snapshot /xxx/zk_data/version-2/snapshot.2c00000000 {noformat} This exception for a while was interspersed with the NPEs but eventually it just was spewing the ConnectionException. Looking through the code a bit it seems if the FileTxnIterator when initialized cannot find any log files the {{inputStream}} is set to null which causes truncate() to NPE.. I see in 3.4.6 this has been wrapped in a try/finally which closes the iterator.. but i presume that this issue would still remain. Looking at the system in this state there were 29k+ sockets in CLOSE_WAIT state on the system and looking at a heap dump there were tons of Socket objects waiting for GC (ie not getting properly closed).. this eventually ran the system out of ephemeral ports and hence the ConnectionExceptions.. It would seem that a quick check of {{itr.next()}} prior to attempting truncation would resolve the NPE, but it seems somewhere a connection is not being closed properly when an exception occurs. |
381270 | No Perforce job exists for this issue. | 4 | 381546 | 5 years, 38 weeks, 2 days ago |
Reviewed
|
0|i1tou7: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
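The report's closing suggestion — check that the transaction-log iterator actually yielded a record before truncating, and always release it — can be modelled as follows. This is an illustrative Python sketch under assumed names (`truncate_log`, `open_iterator`), not the actual FileTxnLog code: an empty log directory fails loudly with a descriptive error instead of an NPE, and the finally block releases the iterator so connections are not leaked.

```python
def truncate_log(open_iterator, zxid):
    """Fetch the first log record at or after zxid, guarding the empty case.

    open_iterator(zxid) is assumed to return a generator over log records;
    it is always closed, even when the guard raises.
    """
    itr = open_iterator(zxid)
    try:
        record = next(itr, None)
        if record is None:
            raise ValueError(
                "no transaction log records at or after zxid %#x" % zxid)
        # ... position the stream and truncate here ...
        return record
    finally:
        itr.close()  # release the underlying stream unconditionally
```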
| ZooKeeper | ZOOKEEPER-1899 | zookeeper-cli does not use STDERR and STDOUT correctly to output information |
Bug | Open | Major | Unresolved | Unassigned | Srinath Mantripragada | Srinath Mantripragada | 20/Mar/14 11:01 | 20/Mar/14 12:43 | 3.4.5 | 0 | 2 | Centos 6, Claudera Hadoop 5 | When running a zookeeper-cli, any error or logging information should go to STDERR and the result(s) of the command to STDOUT. For example, let's take the unix 'ls' command: Unix, STDERR is redirected to '/dev/null' and no results are shown: {code} $ ls /aa 2> /dev/null {code} zookeeper-cli, everything goes to STDOUT where only the last line should: {code} zookeeper-client ls / 2> /dev/null Connecting to localhost:2181 2014-03-20 14:53:12,220 [myid:] - INFO [main:Environment@100] - Client environment:zookeeper.version=3.4.5-cdh5.0.0-beta-2--1, built on 02/07/2014 18:28 GMT 2014-03-20 14:53:12,227 [myid:] - INFO [main:Environment@100] - Client environment:host.name=node1 2014-03-20 14:53:12,228 [myid:] - INFO [main:Environment@100] - Client environment:java.version=1.7.0_51 2014-03-20 14:53:12,229 [myid:] - INFO [main:Environment@100] - Client environment:java.vendor=Oracle Corporation 2014-03-20 14:53:12,230 [myid:] - INFO [main:Environment@100] - Client environment:java.home=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.51.x86_64/jre 2014-03-20 14:53:12,231 [myid:] - INFO [main:Environment@100] - Client 
environment:java.class.path=/usr/lib/zookeeper/bin/../build/classes:/usr/lib/zookeeper/bin/../build/lib/*.jar:/usr/lib/zookeeper/bin/../lib/slf4j-log4j12.jar:/usr/lib/zookeeper/bin/../lib/slf4j-log4j12-1.7.5.jar:/usr/lib/zookeeper/bin/../lib/slf4j-api-1.7.5.jar:/usr/lib/zookeeper/bin/../lib/netty-3.2.2.Final.jar:/usr/lib/zookeeper/bin/../lib/log4j-1.2.15.jar:/usr/lib/zookeeper/bin/../lib/jline-0.9.94.jar:/usr/lib/zookeeper/bin/../zookeeper-3.4.5-cdh5.0.0-beta-2.jar:/usr/lib/zookeeper/bin/../src/java/lib/*.jar:/etc/zookeeper/conf::/etc/zookeeper/conf:/usr/lib/zookeeper/zookeeper.jar:/usr/lib/zookeeper/zookeeper-3.4.5-cdh5.0.0-beta-2.jar:/usr/lib/zookeeper/lib/slf4j-log4j12.jar:/usr/lib/zookeeper/lib/slf4j-api-1.7.5.jar:/usr/lib/zookeeper/lib/log4j-1.2.15.jar:/usr/lib/zookeeper/lib/slf4j-log4j12-1.7.5.jar:/usr/lib/zookeeper/lib/jline-0.9.94.jar:/usr/lib/zookeeper/lib/netty-3.2.2.Final.jar 2014-03-20 14:53:12,232 [myid:] - INFO [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib 2014-03-20 14:53:12,233 [myid:] - INFO [main:Environment@100] - Client environment:java.io.tmpdir=/tmp 2014-03-20 14:53:12,234 [myid:] - INFO [main:Environment@100] - Client environment:java.compiler=<NA> 2014-03-20 14:53:12,235 [myid:] - INFO [main:Environment@100] - Client environment:os.name=Linux 2014-03-20 14:53:12,235 [myid:] - INFO [main:Environment@100] - Client environment:os.arch=amd64 2014-03-20 14:53:12,236 [myid:] - INFO [main:Environment@100] - Client environment:os.version=2.6.32-431.3.1.el6.x86_64 2014-03-20 14:53:12,237 [myid:] - INFO [main:Environment@100] - Client environment:user.name=hdfs 2014-03-20 14:53:12,238 [myid:] - INFO [main:Environment@100] - Client environment:user.home=/var/lib/hadoop-hdfs 2014-03-20 14:53:12,239 [myid:] - INFO [main:Environment@100] - Client environment:user.dir=/var/lib/hadoop-hdfs 2014-03-20 14:53:12,242 [myid:] - INFO [main:ZooKeeper@438] - Initiating client connection, 
connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@5220c1b 2014-03-20 14:53:12,294 [myid:] - INFO [main-SendThread(localhost.localdomain:2181):ClientCnxn$SendThread@966] - Opening socket connection to server localhost.localdomain/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) 2014-03-20 14:53:12,305 [myid:] - INFO [main-SendThread(localhost.localdomain:2181):ClientCnxn$SendThread@849] - Socket connection established to localhost.localdomain/127.0.0.1:2181, initiating session 2014-03-20 14:53:12,319 [myid:] - INFO [main-SendThread(localhost.localdomain:2181):ClientCnxn$SendThread@1207] - Session establishment complete on server localhost.localdomain/127.0.0.1:2181, sessionid = 0x144dbe27e1b001d, negotiated timeout = 30000 WATCHER:: WatchedEvent state:SyncConnected type:None path:null [hadoop-ha, zookeeper] {code} For the get command STDOUT and STDERR are inverted: Results going to STDERR: {code} $ zookeeper-client get /hadoop-ha/Redlabnet 1>/dev/null cZxid = 0x300000027 ctime = Thu Mar 20 14:25:17 UTC 2014 mZxid = 0x300000027 mtime = Thu Mar 20 14:25:17 UTC 2014 pZxid = 0x300000027 cversion = 0 dataVersion = 0 aclVersion = 0 ephemeralOwner = 0x0 dataLength = 0 numChildren = 0 {code} Logs/Errors going to STDOUT: {code} $ zookeeper-client get /hadoop-ha/Redlabnet 2>/dev/null Connecting to localhost:2181 2014-03-20 15:01:22,170 [myid:] - INFO [main:Environment@100] - Client environment:zookeeper.version=3.4.5-cdh5.0.0-beta-2--1, built on 02/07/2014 18:28 GMT 2014-03-20 15:01:22,177 [myid:] - INFO [main:Environment@100] - Client environment:host.name=ip-172-17-0-105.redlabnet.internal 2014-03-20 15:01:22,178 [myid:] - INFO [main:Environment@100] - Client environment:java.version=1.7.0_51 2014-03-20 15:01:22,179 [myid:] - INFO [main:Environment@100] - Client environment:java.vendor=Oracle Corporation 2014-03-20 15:01:22,180 [myid:] - INFO [main:Environment@100] - Client 
environment:java.home=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.51.x86_64/jre 2014-03-20 15:01:22,181 [myid:] - INFO [main:Environment@100] - Client environment:java.class.path=/usr/lib/zookeeper/bin/../build/classes:/usr/lib/zookeeper/bin/../build/lib/*.jar:/usr/lib/zookeeper/bin/../lib/slf4j-log4j12.jar:/usr/lib/zookeeper/bin/../lib/slf4j-log4j12-1.7.5.jar:/usr/lib/zookeeper/bin/../lib/slf4j-api-1.7.5.jar:/usr/lib/zookeeper/bin/../lib/netty-3.2.2.Final.jar:/usr/lib/zookeeper/bin/../lib/log4j-1.2.15.jar:/usr/lib/zookeeper/bin/../lib/jline-0.9.94.jar:/usr/lib/zookeeper/bin/../zookeeper-3.4.5-cdh5.0.0-beta-2.jar:/usr/lib/zookeeper/bin/../src/java/lib/*.jar:/etc/zookeeper/conf::/etc/zookeeper/conf:/usr/lib/zookeeper/zookeeper.jar:/usr/lib/zookeeper/zookeeper-3.4.5-cdh5.0.0-beta-2.jar:/usr/lib/zookeeper/lib/slf4j-log4j12.jar:/usr/lib/zookeeper/lib/slf4j-api-1.7.5.jar:/usr/lib/zookeeper/lib/log4j-1.2.15.jar:/usr/lib/zookeeper/lib/slf4j-log4j12-1.7.5.jar:/usr/lib/zookeeper/lib/jline-0.9.94.jar:/usr/lib/zookeeper/lib/netty-3.2.2.Final.jar 2014-03-20 15:01:22,182 [myid:] - INFO [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib 2014-03-20 15:01:22,182 [myid:] - INFO [main:Environment@100] - Client environment:java.io.tmpdir=/tmp 2014-03-20 15:01:22,183 [myid:] - INFO [main:Environment@100] - Client environment:java.compiler=<NA> 2014-03-20 15:01:22,184 [myid:] - INFO [main:Environment@100] - Client environment:os.name=Linux 2014-03-20 15:01:22,185 [myid:] - INFO [main:Environment@100] - Client environment:os.arch=amd64 2014-03-20 15:01:22,186 [myid:] - INFO [main:Environment@100] - Client environment:os.version=2.6.32-431.3.1.el6.x86_64 2014-03-20 15:01:22,186 [myid:] - INFO [main:Environment@100] - Client environment:user.name=hdfs 2014-03-20 15:01:22,187 [myid:] - INFO [main:Environment@100] - Client environment:user.home=/var/lib/hadoop-hdfs 2014-03-20 15:01:22,188 [myid:] - INFO [main:Environment@100] 
- Client environment:user.dir=/var/lib/hadoop-hdfs 2014-03-20 15:01:22,191 [myid:] - INFO [main:ZooKeeper@438] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@4bb1c978 2014-03-20 15:01:22,242 [myid:] - INFO [main-SendThread(localhost.localdomain:2181):ClientCnxn$SendThread@966] - Opening socket connection to server localhost.localdomain/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) 2014-03-20 15:01:22,266 [myid:] - INFO [main-SendThread(localhost.localdomain:2181):ClientCnxn$SendThread@849] - Socket connection established to localhost.localdomain/127.0.0.1:2181, initiating session 2014-03-20 15:01:22,284 [myid:] - INFO [main-SendThread(localhost.localdomain:2181):ClientCnxn$SendThread@1207] - Session establishment complete on server localhost.localdomain/127.0.0.1:2181, sessionid = 0x144dbe27e1b001f, negotiated timeout = 30000 WATCHER:: WatchedEvent state:SyncConnected type:None path:null {code} |
380994 | No Perforce job exists for this issue. | 0 | 381272 | 6 years, 1 week ago | 0|i1tn5r: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
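The stream inversion reported above comes down to a simple contract: command results belong on stdout and diagnostics belong on stderr, so that `1>/dev/null` hides results and `2>/dev/null` hides logs. As a hypothetical illustration (this classifier is not ZooKeeper code), the intended split for the "get" output shown in the report could look like:

```java
// Hypothetical sketch, not ZooKeeper code: classify a line of CLI
// output by the stream it should go to. Timestamped log lines and
// watcher notifications are diagnostics (stderr); everything else,
// such as "cZxid = 0x300000027", is a result (stdout).
public class StreamContract {
    static String streamFor(String line) {
        boolean isDiagnostic =
                line.matches("^\\d{4}-\\d{2}-\\d{2} .*")  // timestamped log line
                || line.startsWith("WATCHER::")
                || line.startsWith("WatchedEvent ");
        return isDiagnostic ? "stderr" : "stdout";
    }
}
```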
| ZooKeeper | ZOOKEEPER-1898 | ZooKeeper Java cli shell always returns "0" as exit code |
Bug | Closed | Critical | Fixed | Abraham Fine | Srinath Mantripragada | Srinath Mantripragada | 20/Mar/14 10:45 | 17/May/17 23:43 | 26/Jul/16 19:36 | 3.4.5 | 3.5.3, 3.6.0 | java client | 0 | 7 | ZOOKEEPER-2074 | CentOS 6, Cloudera Hadoop 5 | zookeeper-client always returns "0" as its exit code whether or not the command succeeded. Ex: Unsuccessful: {code} -bash-4.1$ zookeeper-client aa Connecting to localhost:2181 2014-03-20 14:43:01,361 [myid:] - INFO [main:Environment@100] - Client environment:zookeeper.version=3.4.5-cdh5.0.0-beta-2--1, built on 02/07/2014 18:28 GMT 2014-03-20 14:43:01,368 [myid:] - INFO [main:Environment@100] - Client environment:host.name=ip-172-17-0-105.redlabnet.internal 2014-03-20 14:43:01,369 [myid:] - INFO [main:Environment@100] - Client environment:java.version=1.7.0_51 2014-03-20 14:43:01,370 [myid:] - INFO [main:Environment@100] - Client environment:java.vendor=Oracle Corporation 2014-03-20 14:43:01,371 [myid:] - INFO [main:Environment@100] - Client environment:java.home=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.51.x86_64/jre 2014-03-20 14:43:01,371 [myid:] - INFO [main:Environment@100] - Client 
environment:java.class.path=/usr/lib/zookeeper/bin/../build/classes:/usr/lib/zookeeper/bin/../build/lib/*.jar:/usr/lib/zookeeper/bin/../lib/slf4j-log4j12.jar:/usr/lib/zookeeper/bin/../lib/slf4j-log4j12-1.7.5.jar:/usr/lib/zookeeper/bin/../lib/slf4j-api-1.7.5.jar:/usr/lib/zookeeper/bin/../lib/netty-3.2.2.Final.jar:/usr/lib/zookeeper/bin/../lib/log4j-1.2.15.jar:/usr/lib/zookeeper/bin/../lib/jline-0.9.94.jar:/usr/lib/zookeeper/bin/../zookeeper-3.4.5-cdh5.0.0-beta-2.jar:/usr/lib/zookeeper/bin/../src/java/lib/*.jar:/etc/zookeeper/conf::/etc/zookeeper/conf:/usr/lib/zookeeper/zookeeper.jar:/usr/lib/zookeeper/zookeeper-3.4.5-cdh5.0.0-beta-2.jar:/usr/lib/zookeeper/lib/slf4j-log4j12.jar:/usr/lib/zookeeper/lib/slf4j-api-1.7.5.jar:/usr/lib/zookeeper/lib/log4j-1.2.15.jar:/usr/lib/zookeeper/lib/slf4j-log4j12-1.7.5.jar:/usr/lib/zookeeper/lib/jline-0.9.94.jar:/usr/lib/zookeeper/lib/netty-3.2.2.Final.jar 2014-03-20 14:43:01,372 [myid:] - INFO [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib 2014-03-20 14:43:01,373 [myid:] - INFO [main:Environment@100] - Client environment:java.io.tmpdir=/tmp 2014-03-20 14:43:01,374 [myid:] - INFO [main:Environment@100] - Client environment:java.compiler=<NA> 2014-03-20 14:43:01,375 [myid:] - INFO [main:Environment@100] - Client environment:os.name=Linux 2014-03-20 14:43:01,375 [myid:] - INFO [main:Environment@100] - Client environment:os.arch=amd64 2014-03-20 14:43:01,376 [myid:] - INFO [main:Environment@100] - Client environment:os.version=2.6.32-431.3.1.el6.x86_64 2014-03-20 14:43:01,377 [myid:] - INFO [main:Environment@100] - Client environment:user.name=hdfs 2014-03-20 14:43:01,377 [myid:] - INFO [main:Environment@100] - Client environment:user.home=/var/lib/hadoop-hdfs 2014-03-20 14:43:01,378 [myid:] - INFO [main:Environment@100] - Client environment:user.dir=/var/lib/hadoop-hdfs 2014-03-20 14:43:01,382 [myid:] - INFO [main:ZooKeeper@438] - Initiating client connection, 
connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@5220c1b ZooKeeper -server host:port cmd args connect host:port get path [watch] ls path [watch] set path data [version] rmr path delquota [-n|-b] path quit printwatches on|off create [-s] [-e] path data acl stat path [watch] close ls2 path [watch] history listquota path setAcl path acl getAcl path sync path redo cmdno addauth scheme auth delete path [version] setquota -n|-b val path -bash-4.1$ echo $? 0 {code} Successful: {code} -bash-4.1$ zookeeper-client ls / Connecting to localhost:2181 2014-03-20 14:43:53,881 [myid:] - INFO [main:Environment@100] - Client environment:zookeeper.version=3.4.5-cdh5.0.0-beta-2--1, built on 02/07/2014 18:28 GMT 2014-03-20 14:43:53,889 [myid:] - INFO [main:Environment@100] - Client environment:host.name=ip-172-17-0-105.redlabnet.internal 2014-03-20 14:43:53,889 [myid:] - INFO [main:Environment@100] - Client environment:java.version=1.7.0_51 2014-03-20 14:43:53,890 [myid:] - INFO [main:Environment@100] - Client environment:java.vendor=Oracle Corporation 2014-03-20 14:43:53,891 [myid:] - INFO [main:Environment@100] - Client environment:java.home=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.51.x86_64/jre 2014-03-20 14:43:53,892 [myid:] - INFO [main:Environment@100] - Client 
environment:java.class.path=/usr/lib/zookeeper/bin/../build/classes:/usr/lib/zookeeper/bin/../build/lib/*.jar:/usr/lib/zookeeper/bin/../lib/slf4j-log4j12.jar:/usr/lib/zookeeper/bin/../lib/slf4j-log4j12-1.7.5.jar:/usr/lib/zookeeper/bin/../lib/slf4j-api-1.7.5.jar:/usr/lib/zookeeper/bin/../lib/netty-3.2.2.Final.jar:/usr/lib/zookeeper/bin/../lib/log4j-1.2.15.jar:/usr/lib/zookeeper/bin/../lib/jline-0.9.94.jar:/usr/lib/zookeeper/bin/../zookeeper-3.4.5-cdh5.0.0-beta-2.jar:/usr/lib/zookeeper/bin/../src/java/lib/*.jar:/etc/zookeeper/conf::/etc/zookeeper/conf:/usr/lib/zookeeper/zookeeper.jar:/usr/lib/zookeeper/zookeeper-3.4.5-cdh5.0.0-beta-2.jar:/usr/lib/zookeeper/lib/slf4j-log4j12.jar:/usr/lib/zookeeper/lib/slf4j-api-1.7.5.jar:/usr/lib/zookeeper/lib/log4j-1.2.15.jar:/usr/lib/zookeeper/lib/slf4j-log4j12-1.7.5.jar:/usr/lib/zookeeper/lib/jline-0.9.94.jar:/usr/lib/zookeeper/lib/netty-3.2.2.Final.jar 2014-03-20 14:43:53,893 [myid:] - INFO [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib 2014-03-20 14:43:53,894 [myid:] - INFO [main:Environment@100] - Client environment:java.io.tmpdir=/tmp 2014-03-20 14:43:53,894 [myid:] - INFO [main:Environment@100] - Client environment:java.compiler=<NA> 2014-03-20 14:43:53,895 [myid:] - INFO [main:Environment@100] - Client environment:os.name=Linux 2014-03-20 14:43:53,896 [myid:] - INFO [main:Environment@100] - Client environment:os.arch=amd64 2014-03-20 14:43:53,897 [myid:] - INFO [main:Environment@100] - Client environment:os.version=2.6.32-431.3.1.el6.x86_64 2014-03-20 14:43:53,897 [myid:] - INFO [main:Environment@100] - Client environment:user.name=hdfs 2014-03-20 14:43:53,898 [myid:] - INFO [main:Environment@100] - Client environment:user.home=/var/lib/hadoop-hdfs 2014-03-20 14:43:53,899 [myid:] - INFO [main:Environment@100] - Client environment:user.dir=/var/lib/hadoop-hdfs 2014-03-20 14:43:53,902 [myid:] - INFO [main:ZooKeeper@438] - Initiating client connection, 
connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@5a9e40d2 2014-03-20 14:43:53,953 [myid:] - INFO [main-SendThread(localhost.localdomain:2181):ClientCnxn$SendThread@966] - Opening socket connection to server localhost.localdomain/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) 2014-03-20 14:43:53,963 [myid:] - INFO [main-SendThread(localhost.localdomain:2181):ClientCnxn$SendThread@849] - Socket connection established to localhost.localdomain/127.0.0.1:2181, initiating session 2014-03-20 14:43:53,977 [myid:] - INFO [main-SendThread(localhost.localdomain:2181):ClientCnxn$SendThread@1207] - Session establishment complete on server localhost.localdomain/127.0.0.1:2181, sessionid = 0x144dbe27e1b0013, negotiated timeout = 30000 WATCHER:: WatchedEvent state:SyncConnected type:None path:null [hadoop-ha, zookeeper] -bash-4.1$ echo $? 0 {code} |
380988 | No Perforce job exists for this issue. | 3 | 381266 | 3 years, 34 weeks, 2 days ago |
Reviewed
|
0|i1tn4f: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
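The fix ZOOKEEPER-1898 asks for amounts to mapping command outcomes to distinct process exit codes so that scripts can test `$?`. The sketch below is illustrative only: the constants follow common shell conventions and are not necessarily the codes the eventual fix (shipped per the Fix Version/s field in 3.5.3/3.6.0) chose.

```java
// Hedged sketch of exit-code mapping for a CLI like zookeeper-client.
// These codes are illustrative conventions, not the committed fix.
public class ExitCodes {
    static final int OK = 0;             // command ran and succeeded
    static final int FAILURE = 1;        // command ran but failed
    static final int UNRECOGNIZED = 127; // shell convention: unknown command

    static int exitCodeFor(boolean recognized, boolean succeeded) {
        if (!recognized) {
            return UNRECOGNIZED; // e.g. the "aa" case above, which today exits 0
        }
        return succeeded ? OK : FAILURE;
    }
}
```

With this mapping, `zookeeper-client aa; echo $?` would print 127 instead of 0.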
| ZooKeeper | ZOOKEEPER-1897 | ZK Shell/Cli not processing commands |
Bug | Resolved | Major | Fixed | Michael Stack | Cameron Gandevia | Cameron Gandevia | 18/Mar/14 19:10 | 20/May/17 19:07 | 03/Apr/14 18:41 | 3.4.6 | 3.4.7, 3.5.0 | java client, scripts | 0 | 7 | ZOOKEEPER-2004, ZOOKEEPER-2787, ZOOKEEPER-2009, HBASE-10903, ZOOKEEPER-1535 | When running zookeeper 3.4.5 I was able to run commands using zkCli such as
zkCli.sh -server 127.0.0.1:2182 ls /
zkCli.sh -server 127.0.0.1:2182 get /blah
After upgrading to 3.4.6 these commands no longer work. I think issue https://issues.apache.org/jira/browse/ZOOKEEPER-1535 was the reason the commands were running in previous versions. It looks like the client exits when a command is present. {code:title=ZooKeeperMain.java}
void run() throws KeeperException, IOException, InterruptedException {
    if (cl.getCommand() == null) {
        System.out.println("Welcome to ZooKeeper!");
        boolean jlinemissing = false;
        // only use jline if it's in the classpath
        try {
            Class consoleC = Class.forName("jline.ConsoleReader");
            Class completorC = Class.forName("org.apache.zookeeper.JLineZNodeCompletor");
            System.out.println("JLine support is enabled");
            Object console = consoleC.getConstructor().newInstance();
            Object completor = completorC.getConstructor(ZooKeeper.class).newInstance(zk);
            Method addCompletor = consoleC.getMethod("addCompletor", Class.forName("jline.Completor"));
            addCompletor.invoke(console, completor);
            String line;
            Method readLine = consoleC.getMethod("readLine", String.class);
            while ((line = (String) readLine.invoke(console, getPrompt())) != null) {
                executeLine(line);
            }
        } catch (ClassNotFoundException e) {
            LOG.debug("Unable to start jline", e);
            jlinemissing = true;
        } catch (NoSuchMethodException e) {
            LOG.debug("Unable to start jline", e);
            jlinemissing = true;
        } catch (InvocationTargetException e) {
            LOG.debug("Unable to start jline", e);
            jlinemissing = true;
        } catch (IllegalAccessException e) {
            LOG.debug("Unable to start jline", e);
            jlinemissing = true;
        } catch (InstantiationException e) {
            LOG.debug("Unable to start jline", e);
            jlinemissing = true;
        }
        if (jlinemissing) {
            System.out.println("JLine support is disabled");
            BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
            String line;
            while ((line = br.readLine()) != null) {
                executeLine(line);
            }
        }
    }
}
{code} |
380622 | No Perforce job exists for this issue. | 3 | 380901 | 5 years, 50 weeks, 6 days ago | 0|i1tkvj: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1896 | Reconfig error messages when upgrading from 3.4.6 to 3.5.0 |
Bug | Open | Major | Unresolved | Unassigned | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | 17/Mar/14 12:41 | 05/Feb/20 07:16 | 3.5.0 | 3.7.0, 3.5.8 | server | 0 | 5 | ZOOKEEPER-1810 | When upgrading from 3.4.6 (rc0 actually) to 3.5.0 (trunk as of two weeks ago actually) I got this error message: {noformat} 2014-02-26 22:12:15,446 - ERROR [WorkerReceiver[myid=4]] - Something went wrong while processing config received from 3 {noformat} According to [~fpj]: bq. I think you’re right that the reconfig error is harmless, but we shouldn’t be getting it. The problem is that it is not detecting that we are in backward compatibility mode. We need to fix it for 3.5.0 and perhaps ZOOKEEPER-1805 is the right place for doing it. cc: [~shralex] |
380268 | No Perforce job exists for this issue. | 0 | 380551 | 1 year, 17 weeks, 1 day ago | 0|i1tiqn: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1895 | update all notice files, copyright, etc... with the new year - 2014 |
Bug | Resolved | Blocker | Fixed | Michi Mutsuzaki | Patrick D. Hunt | Patrick D. Hunt | 16/Mar/14 12:35 | 20/May/14 07:09 | 16/May/14 13:58 | 3.4.7, 3.5.0 | 3.4.7, 3.5.0 | 0 | 3 | From a note on the list: Hi folks! This is a reminder to update the year in the NOTICE files from 2013 (or older) to 2014. From a legal POV this is not that important as some say. But nonetheless it's good to update the year. LieGrue, strub |
380098 | No Perforce job exists for this issue. | 1 | 380382 | 5 years, 44 weeks, 2 days ago | 0|i1thp3: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1894 | ObserverTest.testObserver fails consistently |
Bug | Resolved | Major | Fixed | Michi Mutsuzaki | Michi Mutsuzaki | Michi Mutsuzaki | 13/Mar/14 22:43 | 27/Mar/14 21:20 | 27/Mar/14 20:59 | 3.5.0 | 3.5.0 | quorum | 0 | 7 | ubuntu 13.10 Server environment:java.version=1.7.0_51 Server environment:java.vendor=Oracle Corporation |
ObserverTest.testObserver fails consistently on my box. It looks like the observer (myid:3) calls QuorumPeer.getQuorumVerifier() in a tight loop, and the leader (myid:2) is not getting enough CPU time to synchronize with the follower and the observer. The test passes if I increase ClientBase.CONNECTION_TIMEOUT from 30 seconds to 120 seconds. I'll attach a log file. | 379750 | No Perforce job exists for this issue. | 4 | 380035 | 5 years, 51 weeks, 6 days ago | 0|i1tfkf: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1893 | automake: use serial-tests option |
Bug | Resolved | Minor | Fixed | Michi Mutsuzaki | Michi Mutsuzaki | Michi Mutsuzaki | 12/Mar/14 23:46 | 17/Apr/15 14:11 | 14/Mar/15 21:10 | 3.5.1, 3.6.0 | c client | 0 | 8 | ZOOKEEPER-2138, ZOOKEEPER-1707 | automake switched to run tests in parallel by default in 1.13, but zktest-st and zktest-mt can't run in parallel. We can use the serial-tests option to run tests serially but this option was introduced in automake 1.12. I don't know which version of automake buidbot has. I'll upload the patch and see. | 379499 | No Perforce job exists for this issue. | 1 | 379790 | 4 years, 48 weeks, 6 days ago | 0|i1te1z: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
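If automake 1.12 or newer can be assumed on buildbot, the change described in ZOOKEEPER-1893 is a one-line configure.ac edit. This is a hypothetical sketch of the shape of that change, not the committed patch:

```m4
dnl Sketch (assumption: automake >= 1.12 on the build machines):
dnl require 1.12 and opt out of the parallel test harness that became
dnl the default in automake 1.13, so zktest-st and zktest-mt run serially.
AM_INIT_AUTOMAKE([1.12 serial-tests])
```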
| ZooKeeper | ZOOKEEPER-1892 | addrvec_next gets called twice when failing over to the next server |
Bug | Resolved | Major | Duplicate | Unassigned | Michi Mutsuzaki | Michi Mutsuzaki | 12/Mar/14 22:06 | 12/Mar/14 23:41 | 12/Mar/14 23:41 | 3.5.0 | c client | 0 | 3 | zookeeper_interest() already calls zoo_cycle_next_server() when the socket is set to -1, so we shouldn't call addrvec_next in handle_error. This causes the next server to get skipped. Zookeeper_simpleSystem::testFirstServerDown fails unless the client gets connected to the server during the first round because the client keeps skipping the second server after the first round. | 379491 | No Perforce job exists for this issue. | 1 | 379782 | 6 years, 2 weeks ago | 0|i1te07: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1891 | StaticHostProviderTest.testUpdateLoadBalancing times out |
Bug | Resolved | Major | Fixed | Michi Mutsuzaki | Michi Mutsuzaki | Michi Mutsuzaki | 10/Mar/14 19:33 | 20/May/14 07:09 | 10/May/14 06:46 | 3.5.0 | 3.5.0 | java client | 0 | 5 | ubuntu 13.10 Server environment:java.version=1.7.0_51 Server environment:java.vendor=Oracle Corporation |
StaticHostProviderTest.testUpdateLoadBalancing is consistently timing out on my box. I'll attach a log file. | 378246 | No Perforce job exists for this issue. | 3 | 378538 | 5 years, 44 weeks, 2 days ago | 0|i1t6cf: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1890 | unclosed FileOutputStream in FileTxnLog.rollLog |
Bug | Open | Major | Unresolved | Unassigned | Andrew Gaul | Andrew Gaul | 27/Feb/14 20:21 | 14/Feb/15 00:13 | 0 | 4 | When calling rollLog, FileTxnLog flushes but does not close its FileOutputStream, leaking a file descriptor. | 376312 | No Perforce job exists for this issue. | 1 | 376608 | 5 years, 5 weeks, 5 days ago | 0|i1sugn: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
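The fix shape for the rollLog leak described above is to flush and then close the old stream before switching to the next file. This is a minimal sketch of that pattern, not FileTxnLog's actual code; the RecordingStream stand-in exists only so the behavior can be observed.

```java
import java.io.IOException;
import java.io.OutputStream;

// Sketch of the flush-then-close pattern missing from rollLog:
// closing the previous stream is what releases its file descriptor.
public class RollSketch {
    /** Stand-in stream that records whether it was closed. */
    static class RecordingStream extends OutputStream {
        boolean closed = false;
        @Override public void write(int b) {}
        @Override public void close() { closed = true; }
    }

    private OutputStream current;

    void rollLog(OutputStream next) {
        if (current != null) {
            try {
                current.flush();
                current.close(); // the missing step in the report above
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        }
        current = next;
    }
}
```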
| ZooKeeper | ZOOKEEPER-1889 | Implement Top-N sort operator |
Bug | Resolved | Major | Not A Problem | Unassigned | Steven Phillips | Steven Phillips | 25/Feb/14 19:38 | 25/Feb/14 19:52 | 25/Feb/14 19:52 | 0 | 2 | When, for example, doing an order by with a limit, if limit << total, it would be much more efficient to maintain a priority queue instead of sorting the entire data set. In most cases, this will greatly reduce the number of comparisons, since most incoming records will not fall in the Top N, and thus will only require a single comparison operation. Incoming records that are in the Top-N will require at most log N comparisons. This will also allow periodic purging of record batches, reducing memory requirements. |
375742 | No Perforce job exists for this issue. | 0 | 376038 | 6 years, 4 weeks, 1 day ago | 0|i1sqyf: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
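The priority-queue idea described above can be sketched directly: keep a bounded min-heap of the N largest values seen, so a record outside the Top-N costs one comparison against the heap's minimum, and only Top-N candidates pay log N heap operations. A minimal illustration over integers:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.PriorityQueue;

// Top-N via a bounded min-heap, as described in the issue above.
public class TopN {
    static List<Integer> topN(Iterable<Integer> values, int n) {
        PriorityQueue<Integer> heap = new PriorityQueue<>(); // min-heap
        for (int v : values) {
            if (heap.size() < n) {
                heap.add(v);
            } else if (v > heap.peek()) {
                // only a Top-N candidate pays the log N replace
                heap.poll();
                heap.add(v);
            }
            // otherwise: one comparison, record discarded
        }
        List<Integer> out = new ArrayList<>(heap);
        out.sort(Collections.reverseOrder());
        return out;
    }
}
```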
| ZooKeeper | ZOOKEEPER-1888 | ZkCli.cmd commands fail with "'java' is not recognized as an internal or external command" |
Bug | Resolved | Major | Fixed | Ivan Mitic | Ivan Mitic | Ivan Mitic | 25/Feb/14 14:07 | 13/Mar/14 22:55 | 13/Mar/14 22:55 | 3.4.5 | 3.4.7, 3.5.0 | 0 | 5 | Windows | This appears to be a bug in ZkCli.cmd as it does not try to locate java using the JAVA_HOME environment variable. Will post the patch soon. |
375661 | No Perforce job exists for this issue. | 3 | 375957 | 6 years, 1 week, 6 days ago | 0|i1sqgf: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1887 | C implementation of removeWatches |
New Feature | Resolved | Major | Fixed | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | 23/Feb/14 20:00 | 16/Dec/18 09:50 | 16/Apr/14 02:16 | 3.5.0 | c client | 1 | 8 | ZOOKEEPER-442, ZOOKEEPER-1919, ZOOKEEPER-2611, ZOOKEEPER-1829 | This is equivalent for ZOOKEEPER-442's Java impl. | remove_watches | 375303 | No Perforce job exists for this issue. | 3 | 375599 | 5 years, 49 weeks, 1 day ago | 0|i1so93: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1886 | Exception in Follower.followLeader() where Leader is still running, can make that follower hang in LeaderElection |
Bug | Open | Major | Unresolved | Unassigned | Vinayakumar B | Vinayakumar B | 21/Feb/14 00:35 | 21/Feb/14 00:35 | server | 0 | 2 | A SocketTimeoutException in {{Follower#followLeader()}} while the leader is still running successfully can leave this follower unable to rejoin the quorum. Analysis: 1. A SocketTimeoutException in the code below makes the follower stop following (without a process shutdown) and try to participate in leader election again. {code} while (self.isRunning()) { readPacket(qp); processPacket(qp); }{code} 2. During leader election, {{FastLeaderElection#logicalclock}} is incremented only on the follower side, and ends up greater than the leader's electionEpoch. 3. Notifications from the leader are ignored, while notifications from this follower are continuously sent and ignored again. |
374944 | No Perforce job exists for this issue. | 0 | 375243 | 6 years, 4 weeks, 6 days ago | 0|i1sm1z: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1885 | Znodes deletable by anyone without having the rights to do so |
Bug | Resolved | Major | Not A Problem | Unassigned | Behar Veliqi | Behar Veliqi | 20/Feb/14 10:55 | 21/Feb/14 04:52 | 21/Feb/14 04:52 | 3.4.5 | 0 | 2 | Ubuntu 12.04 LTS 64-bit | Hi, I'm not really sure if this is a bug or a misunderstanding on my part, but I have the problem that, when I create a znode with an ACL as follows: {noformat} [zk: localhost:2181(CONNECTED) 60] create /anode "somecontent" digest:'user:IAEttLCxci/qWhKN2QJ6u1nrQgw=':cdrwa Created /anode [zk: localhost:2181(CONNECTED) 61] getAcl /anode 'digest,''user:IAEttLCxci/qWhKN2QJ6u1nrQgw=' : cdrwa {noformat} I am not able to read or update the content of the node, as it should be: {noformat} [zk: localhost:2181(CONNECTED) 62] get /anode Authentication is not valid : /anode [zk: localhost:2181(CONNECTED) 63] set /anode "update" Authentication is not valid : /anode {noformat} But everyone without being authenticated can delete the node: {noformat} [zk: localhost:2181(CONNECTED) 64] delete /anode [zk: localhost:2181(CONNECTED) 65] get /anode Node does not exist: /anode {noformat} Is this a bug or is there a way to set the ACL so that only the user having the credentials can delete the znode? |
374756 | No Perforce job exists for this issue. | 0 | 375056 | 6 years, 4 weeks, 6 days ago | 0|i1skwf: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
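The "Not A Problem" resolution above is consistent with ZooKeeper's permission model: the DELETE bit is checked against the *parent* znode's ACL (delete means "may delete children"), so the ACL on /anode itself cannot prevent its deletion. The sketch below reimplements only the permission-letter parsing for illustration; the bit values match org.apache.zookeeper.ZooDefs.Perms, but the parser itself is hypothetical helper code, not the ZooKeeper API.

```java
// Illustration of ZooKeeper permission bits (values as in ZooDefs.Perms).
// Note: DELETE governs deleting *children* of the node carrying the ACL,
// which is why /anode above was deletable despite its own restrictive ACL.
public class Perms {
    static final int READ = 1, WRITE = 2, CREATE = 4, DELETE = 8, ADMIN = 16;

    // Parse a CLI permission string like "cdrwa" into a bit mask.
    static int parse(String s) {
        int p = 0;
        for (char c : s.toCharArray()) {
            switch (c) {
                case 'r': p |= READ; break;
                case 'w': p |= WRITE; break;
                case 'c': p |= CREATE; break;
                case 'd': p |= DELETE; break;
                case 'a': p |= ADMIN; break;
            }
        }
        return p;
    }
}
```

To protect /anode from deletion, the delete bit would have to be withheld on its parent's ACL, not on /anode's own.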
| ZooKeeper | ZOOKEEPER-1884 | zkCli silently ignores commands with missing parameters |
Bug | Open | Minor | Unresolved | Raúl Gutiérrez Segalés | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 20/Feb/14 06:58 | 22/Jun/18 00:49 | 3.4.6, 3.4.11 | 0 | 4 | Apparently, we have fixed this in trunk, but not in the 3.4 branch. When we pass only the path to create, the command is not executed because it expects an additional parameter and there is no error message because the create command exists. | 374697 | No Perforce job exists for this issue. | 1 | 374997 | 3 years, 30 weeks, 1 day ago | 0|i1skjb: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1883 | C client unit test failures |
Bug | Resolved | Minor | Fixed | Raúl Gutiérrez Segalés | Abhiraj Butala | Abhiraj Butala | 17/Feb/14 22:30 | 13/Mar/14 18:08 | 13/Mar/14 16:58 | 3.5.0 | c client, tests | 0 | 5 | I am seeing a unit test failure for the C client after I do 'make check', as shown below. The failure is pretty consistent, but does not always happen. This is on the latest check-out of zookeeper trunk. ------------------ Zookeeper_simpleSystem::testAsyncWatcherAutoReset ZooKeeper server started : elapsed 9640 : OK Zookeeper_simpleSystem::testDeserializeString : elapsed 0 : OK Zookeeper_simpleSystem::testFirstServerDown : elapsed 1007 : OK Zookeeper_simpleSystem::testNullData : elapsed 1028 : OK Zookeeper_simpleSystem::testIPV6 : elapsed 1008 : OK Zookeeper_simpleSystem::testCreate : elapsed 1016 : OK Zookeeper_simpleSystem::testPath : elapsed 1083 : OK Zookeeper_simpleSystem::testPathValidation : elapsed 1046 : OK Zookeeper_simpleSystem::testPing : elapsed 17301 : OK Zookeeper_simpleSystem::testAcl : elapsed 1018 : OK Zookeeper_simpleSystem::testChroot : elapsed 3057 : OK Zookeeper_simpleSystem::testAuth ZooKeeper server started ZooKeeper server started : elapsed 29357 : OK Zookeeper_simpleSystem::testHangingClient : elapsed 1037 : OK Zookeeper_simpleSystem::testWatcherAutoResetWithGlobal ZooKeeper server started ZooKeeper server started ZooKeeper server started : elapsed 12983 : OK Zookeeper_simpleSystem::testWatcherAutoResetWithLocal ZooKeeper server started ZooKeeper server started ZooKeeper server started : elapsed 13028 : OK Zookeeper_simpleSystem::testGetChildren2 : elapsed 1031 : OK Zookeeper_simpleSystem::testLastZxid : assertion : elapsed 2514 Zookeeper_watchers::testDefaultSessionWatcher1 : elapsed 52 : OK Zookeeper_watchers::testDefaultSessionWatcher2 : elapsed 3 : OK Zookeeper_watchers::testObjectSessionWatcher1 : elapsed 52 : OK Zookeeper_watchers::testObjectSessionWatcher2 : elapsed 54 : OK Zookeeper_watchers::testNodeWatcher1 : elapsed 55 : OK Zookeeper_watchers::testChildWatcher1 : 
elapsed 3 : OK Zookeeper_watchers::testChildWatcher2 : elapsed 3 : OK tests/TestClient.cc:1281: Assertion: equality assertion failed [Expected: 1239, Actual : 1238] Failures !!! Run: 70 Failure total: 1 Failures: 1 Errors: 0 FAIL: zktest-mt ========================================== 1 of 2 tests failed Please report to user@zookeeper.apache.org ========================================== make[1]: *** [check-TESTS] Error 1 make[1]: Leaving directory `/home/abutala/work/zk/zookeeper-trunk/src/c' make: *** [check-am] Error 2 ------------------ $ uname -a Linux abutala-vBox 3.8.0-35-generic #50~precise1-Ubuntu SMP Wed Dec 4 17:25:51 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux |
374131 | No Perforce job exists for this issue. | 1 | 374431 | 6 years, 2 weeks ago | 0|i1sh1j: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1881 | Shutdown server immediately upon PrivilegedActionException |
Improvement | Patch Available | Major | Unresolved | Ding Yuan | Ding Yuan | Ding Yuan | 13/Feb/14 18:53 | 05/Feb/20 07:11 | 3.4.5 | 3.7.0, 3.5.8 | server | 0 | 3 | It seems that when a SaslServer cannot be created due to a PrivilegedActionException, it is better to shut down the server immediately instead of letting it propagate. The current behaviour just sets ServerCnxn.zooKeeperSaslServer to null, and from then on every incoming SASL request is rejected. If we can already detect the problem early, we should just fail early. {noformat} private SaslServer createSaslServer(final Login login) { catch (PrivilegedActionException e) { // TODO: exit server at this point(?) LOG.error("Zookeeper Quorum member experienced a PrivilegedActionException exception while creating a SaslServer using a JAAS principal context:" + e); e.printStackTrace(); } {noformat} For what it is worth, attaching an attempt to patch it. The idea of the patch is to propagate this PrivilegedActionException to ServerCnxnFactory and shut down all the connections and the server. Not sure if this is the right way to solve it. Any comments are appreciated! The patch also adds logging for two previously unlogged exceptions. |
373619 | No Perforce job exists for this issue. | 1 | 373919 | 3 years, 39 weeks, 2 days ago | 0|i1sdw7: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1879 | improve the correctness checking of txn log replay |
Improvement | Open | Major | Unresolved | Unassigned | Patrick D. Hunt | Patrick D. Hunt | 10/Feb/14 15:46 | 05/Feb/20 07:16 | 3.4.6, 3.5.0 | 3.7.0, 3.5.8 | server | 0 | 2 | ZOOKEEPER-1573 | In ZOOKEEPER-1573 we decided to fix an issue by relaxing some of the checking. Specifically when the sequence of txns is as follows: * zxid 1: create /prefix/a * zxid 2: create /prefix/a/b * zxid 3: delete /prefix/a/b * zxid 4: delete /prefix/a the log may fail to replay. We addressed this by relaxing a check, which is essentially invalid for this case, but is important in finding corruptions of the datastore. We should add this check back with proper validation of correctness. |
372858 | No Perforce job exists for this issue. | 0 | 373162 | 6 years, 6 weeks, 2 days ago | 0|i1s98n: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1878 | Inconsistent behavior in autocreation of dataDir and dataLogDir |
Bug | Resolved | Major | Fixed | Rakesh Radhakrishnan | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 08/Feb/14 23:18 | 27/Mar/14 07:10 | 26/Mar/14 22:02 | 3.4.5 | 3.4.7, 3.5.0 | quorum | 0 | 5 | During startup, if dataDir does not exist the server will auto-create it. But when the user specifies a different dataLogDir location that doesn't exist, the server's validation fails and startup aborts. {code}
org.apache.zookeeper.server.quorum.QuorumPeerConfig$ConfigException: Error processing build\test3085582797504170966.junit.dir\zoo.cfg
    at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:123)
    at org.apache.zookeeper.server.ServerConfig.parse(ServerConfig.java:79)
    at org.apache.zookeeper.server.ZooKeeperServerMain.initializeAndRun(ZooKeeperServerMain.java:81)
    at org.apache.zookeeper.server.ZooKeeperServerMainTest$MainThread.run(ZooKeeperServerMainTest.java:92)
Caused by: java.lang.IllegalArgumentException: dataLogDir build/test3085582797504170966.junit.dir/data_txnlog is missing.
    at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parseProperties(QuorumPeerConfig.java:253)
    at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:119)
    ... 3 more
{code} |
372647 | No Perforce job exists for this issue. | 5 | 372951 | 6 years ago | 0|i1s7y7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
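Making dataDir and dataLogDir behave consistently means treating both the same way at startup, e.g. auto-creating either when it is missing. A minimal sketch of that symmetric handling (hypothetical helper, not the committed ZOOKEEPER-1878 patch):

```java
import java.io.File;

// Sketch: auto-create a configured directory if it is missing, so
// dataDir and dataLogDir get the same treatment at startup.
public class DirCheck {
    static boolean ensureDir(File dir) {
        // true if the directory already exists or was created just now
        return dir.isDirectory() || dir.mkdirs();
    }
}
```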
| ZooKeeper | ZOOKEEPER-1877 | Malformed ACL Id can crash server with skipACL=yes |
Bug | Resolved | Critical | Fixed | Chris Chen | Chris Chen | Chris Chen | 06/Feb/14 18:13 | 25/Jul/14 07:25 | 24/Jul/14 19:32 | 3.5.0 | 3.5.0 | server | 0 | 4 | Because of the way fixupACL is written in PrepRequestProcessor, a request that feeds in an ACL with null members in the Id will cause a server with skipACL=yes to crash. A patch will be provided that re-introduces checks for well-formed ACLs even if skipACL is enabled. |
372277 | No Perforce job exists for this issue. | 1 | 372581 | 5 years, 34 weeks, 6 days ago |
Reviewed
|
0|i1s5of: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
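The fix described above re-introduces well-formedness checks even when skipACL=yes. A hedged sketch of that idea, using stand-in `Id`/`Acl` types rather than the real org.apache.zookeeper.data classes: reject any ACL entry whose Id (or its fields) is null before the request is accepted, so a malformed ACL cannot crash the server later.

```java
import java.util.Arrays;
import java.util.List;

// Illustrative sketch, not the actual PrepRequestProcessor.fixupACL code.
public class AclCheck {
    static final class Id {
        final String scheme, id;
        Id(String scheme, String id) { this.scheme = scheme; this.id = id; }
    }
    static final class Acl {
        final int perms; final Id id;
        Acl(int perms, Id id) { this.perms = perms; this.id = id; }
    }

    // Returns false for any null or structurally incomplete entry,
    // regardless of whether ACL enforcement is enabled.
    static boolean wellFormed(List<Acl> acls) {
        if (acls == null || acls.isEmpty()) return false;
        for (Acl a : acls) {
            if (a == null || a.id == null || a.id.scheme == null || a.id.id == null) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        List<Acl> good = Arrays.asList(new Acl(31, new Id("world", "anyone")));
        List<Acl> bad = Arrays.asList(new Acl(31, null));
        System.out.println(wellFormed(good) + " " + wellFormed(bad));
    }
}
```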
| ZooKeeper | ZOOKEEPER-1876 | Add support for installing windows services in .cmd scripts |
Improvement | Open | Major | Unresolved | Enis Soztutar | Enis Soztutar | Enis Soztutar | 06/Feb/14 16:14 | 05/Feb/20 07:17 | 3.7.0, 3.5.8 | scripts | 0 | 2 | On Windows, daemons can be installed as Windows services during installation, so that they can be managed using the standard "service" commands and UI. For this, we have to generate an XML file describing the command-line program and its arguments. We can add support for a --service parameter passed to bin/zkServer.cmd so that it will output the XML for the service instead of running the command. Hadoop and HBase have the same syntax and mechanics (see https://github.com/apache/hbase/blob/trunk/bin/hbase.cmd#L73) |
372246 | No Perforce job exists for this issue. | 1 | 372550 | 6 years, 7 weeks ago | 0|i1s5hj: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1875 | NullPointerException in ClientCnxn$EventThread.processEvent |
Bug | Open | Minor | Unresolved | Jerry He | Jerry He | Jerry He | 02/Feb/14 23:06 | 05/Feb/20 07:16 | 3.4.5, 3.4.10 | 3.7.0, 3.5.8 | java client | 3 | 11 | We've been seeing NullPointerException while working on HBase: {code} 14/01/30 22:15:25 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/biadmin/hbase-trunk 14/01/30 22:15:25 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=hdtest009:2181 sessionTimeout=90000 watcher=null 14/01/30 22:15:25 INFO zookeeper.ClientCnxn: Opening socket connection to server hdtest009/9.30.194.18:2181. Will not attempt to authenticate using SASL (Unable to locate a login configuration) 14/01/30 22:15:25 INFO zookeeper.ClientCnxn: Socket connection established to hdtest009/9.30.194.18:2181, initiating session 14/01/30 22:15:25 INFO zookeeper.ClientCnxn: Session establishment complete on server hdtest009/9.30.194.18:2181, sessionid = 0x143986213e67e48, negotiated timeout = 60000 14/01/30 22:15:25 ERROR zookeeper.ClientCnxn: Error while calling watcher java.lang.NullPointerException at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:519) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:495) {code} The reason is the watcher is null in this part of the code: {code} private void processEvent(Object event) { try { if (event instanceof WatcherSetEventPair) { // each watcher will process the event WatcherSetEventPair pair = (WatcherSetEventPair) event; for (Watcher watcher : pair.watchers) { try { watcher.process(pair.event); } catch (Throwable t) { LOG.error("Error while calling watcher ", t); } } {code} |
371363 | No Perforce job exists for this issue. | 3 | 371666 | 11 weeks ago | 0|i1s02v: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
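Since the client above was created with watcher=null, the watcher set can contain a null entry, and calling process() on it throws the NPE. A defensive sketch of one possible fix (illustrative, not the actual ClientCnxn patch): skip null watchers in the event loop.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of a null-safe version of the event dispatch loop quoted above.
public class WatcherGuard {
    interface Watcher { void process(String event); }

    static int processEvent(Set<Watcher> watchers, String event) {
        int called = 0;
        for (Watcher w : watchers) {
            if (w == null) {
                continue; // a null default watcher lands here; don't dereference it
            }
            try {
                w.process(event);
                called++;
            } catch (Throwable t) {
                System.err.println("Error while calling watcher " + t);
            }
        }
        return called;
    }

    public static void main(String[] args) {
        Set<Watcher> watchers = new HashSet<>();
        watchers.add(null);      // mirrors connecting with watcher=null
        watchers.add(e -> {});   // a normal watcher
        System.out.println(processEvent(watchers, "NodeCreated"));
    }
}
```

An alternative fix is to refuse null watchers at registration time, which keeps the dispatch loop unchanged.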
| ZooKeeper | ZOOKEEPER-1874 | ZOOKEEPER-1833 Add proper teardown/cleanups in ReconfigTest to shutdown quorumpeer |
Sub-task | Resolved | Major | Fixed | Unassigned | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 01/Feb/14 23:58 | 11/Feb/14 21:29 | 11/Feb/14 21:29 | 3.5.0 | tests | 0 | 4 | This jira to provide proper cleanups in ReconfigTest test cases. | 371290 | No Perforce job exists for this issue. | 2 | 371593 | 6 years, 6 weeks, 1 day ago | 0|i1rzmn: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1873 | ZOOKEEPER-1833 Unnecessarily InstanceNotFoundException is coming when unregister failed jmxbeans |
Sub-task | Closed | Major | Fixed | Rakesh Radhakrishnan | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 29/Jan/14 16:06 | 13/Mar/14 14:16 | 12/Feb/14 06:17 | 3.4.6, 3.5.0 | server | 0 | 4 | MBeanRegistry#register keeps beans that failed to complete registration. At unregistration time, these failed beans result in the following exception. {code} [junit] 2014-01-29 08:34:56,667 [myid:] - WARN [main:MBeanRegistry@134] - Error during unregister [junit] javax.management.InstanceNotFoundException: org.apache.ZooKeeperService:name0=StandaloneServer_port-1 [junit] at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095) [junit] at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:427) [junit] at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:415) [junit] at com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:536) [junit] at org.apache.zookeeper.jmx.MBeanRegistry.unregister(MBeanRegistry.java:115) {code} |
370731 | No Perforce job exists for this issue. | 1 | 371042 | 6 years, 2 weeks ago | 0|i1rw8n: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1872 | ZOOKEEPER-1833 QuorumPeer is not shutdown in few cases |
Sub-task | Closed | Major | Fixed | Rakesh Radhakrishnan | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 29/Jan/14 15:19 | 21/Jul/16 16:18 | 03/Nov/15 02:26 | 3.4.7, 3.5.2, 3.6.0 | 0 | 7 | ZOOKEEPER-1866 | Few cases are leaving quorumpeer running after the test case execution. Needs proper teardown for these. | test | 370717 | No Perforce job exists for this issue. | 15 | 371028 | 4 years, 20 weeks, 2 days ago | 0|i1rw5r: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1871 | Add an option to zkCli to wait for connection before executing commands |
Improvement | Patch Available | Major | Unresolved | Takashi Ohnishi | Vinayakumar B | Vinayakumar B | 29/Jan/14 01:47 | 05/Feb/20 07:11 | 3.4.5 | 3.7.0, 3.5.8 | 0 | 3 | Add an option to zkCli to wait for the connection before executing any commands. This is helpful for the execution of inline commands. We have some scripts that create/delete znodes through the command line. But if establishing the connection is delayed because one of the nodes is down, the command will fail with CONNECTIONLOSS even though a quorum is available. So I propose a command-line option (similar to -server and -timeout), "-waitforconnection", to wait for the connection before executing any commands. |
370559 | No Perforce job exists for this issue. | 4 | 370869 | 3 years, 39 weeks, 2 days ago | Change zkCli.sh to wait until the connection to the quorum is established. Add a command-line option, -waitforconnection, which sets this wait timeout to be the same as the session timeout. |
0|i1rv7b: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
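The wait-for-connection behavior proposed above is commonly built on a latch that the client's event thread releases once the session reaches SyncConnected. A self-contained sketch of that pattern, with the event thread simulated rather than using a real ZooKeeper handle (illustrative only, not the actual zkCli patch):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Sketch of "-waitforconnection": block command execution until an
// asynchronous "connected" event fires, or give up after a timeout.
public class WaitForConnection {
    public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);

        // Stands in for the client event thread delivering SyncConnected.
        Thread eventThread = new Thread(() -> {
            try { Thread.sleep(100); } catch (InterruptedException ignored) {}
            connected.countDown();
        });
        eventThread.start();

        // The CLI would wait here before running any inline command;
        // the timeout would be the session timeout per the release note.
        boolean ok = connected.await(5, TimeUnit.SECONDS);
        System.out.println(ok ? "connected" : "connection loss");
    }
}
```

With a real client, the latch's countDown() would run inside the default Watcher on a SyncConnected event.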
| ZooKeeper | ZOOKEEPER-1870 | flakey test in StandaloneDisabledTest.startSingleServerTest |
Bug | Resolved | Blocker | Fixed | Helen Hastings | Patrick D. Hunt | Patrick D. Hunt | 28/Jan/14 13:45 | 28/Aug/14 12:50 | 28/Aug/14 12:49 | 3.4.6 | 3.5.0 | tests | 0 | 9 | ZOOKEEPER-1810, ZOOKEEPER-1805, ZOOKEEPER-1691 | I'm seeing lots of the following failure. Seems like a flakey test (passes every so often). {noformat} junit.framework.AssertionFailedError: client could not connect to reestablished quorum: giving up after 30+ seconds. at org.apache.zookeeper.test.ReconfigTest.testNormalOperation(ReconfigTest.java:143) at org.apache.zookeeper.server.quorum.StandaloneDisabledTest.startSingleServerTest(StandaloneDisabledTest.java:75) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:52) {noformat} I've found 3 problems: 1. QuorumCnxManager.Listener.run() leaks the socket depending on when the shutdown flag gets set. 2. QuorumCnxManager.halt() doesn't wait for the listener to terminate. 3. QuorumPeer.shuttingDownLE flag doesn't get reset when restarting the leader election. |
370440 | No Perforce job exists for this issue. | 4 | 370761 | 5 years, 39 weeks ago | 0|i1rujj: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1869 | zk server falling apart from quorum due to connection loss and couldn't connect back |
Bug | Resolved | Critical | Abandoned | Unassigned | Deepak Jagtap | Deepak Jagtap | 27/Jan/14 14:43 | 03/Mar/16 00:55 | 03/Mar/16 00:55 | 3.5.0 | quorum | 0 | 16 | Using CentOS6 for running these zookeeper servers | We have deployed zookeeper version 3.5.0.1515976, with 3 zk servers in the quorum. The problem we are facing is that one zookeeper server in the quorum falls apart, and never becomes part of the cluster until we restart zookeeper server on that node. Our interpretation from zookeeper logs on all nodes is as follows: (For simplicity assume S1=> zk server1, S2 => zk server2, S3 => zk server 3) Initially S3 is the leader while S1 and S2 are followers. S2 hits 46 sec latency while fsyncing write ahead log and results in loss of connection with S3. S3 in turn prints following error message: Unexpected exception causing shutdown while sock still open java.net.SocketTimeoutException: Read timed out Stack trace ******* GOODBYE /169.254.1.2:47647(S2) ******** S2 in this case closes connection with S3(leader) and shuts down follower with following log messages: Closing connection to leader, exception during packet send java.net.SocketException: Socket close Follower@194] - shutdown called java.lang.Exception: shutdown Follower After this point S3 could never reestablish connection with S2 and leader election mechanism keeps failing. S3 now keeps printing following message repeatedly: Cannot open channel to 2 at election address /169.254.1.2:3888 java.net.ConnectException: Connection refused. While S3 is in this state, S2 repeatedly keeps printing following message: INFO [NIOServerCxnFactory.AcceptThread:/0.0.0.0:2181:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:60667 Exception causing close of session 0x0: ZooKeeperServer not running Closed socket connection for client /127.0.0.1:60667 (no session established for client) Leader election never completes successfully and causing S2 to fall apart from the quorum. 
S2 was out of quorum for almost 1 week. While debugging this issue, we found out that both the election and peer connection ports on S2 can't be telneted from any of the nodes (S1, S2, S3). Network connectivity is not the issue. Later, we restarted the ZK server S2 (service zookeeper-server restart) -- now we could telnet to both the ports and S2 joined the ensemble after a leader election attempt. Any idea what might be forcing S2 to get into a situation where it won't accept any connections on the leader election and peer connection ports? |
370195 | No Perforce job exists for this issue. | 0 | 370497 | 4 years, 3 weeks ago | 0|i1rsyf: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1868 | ZOOKEEPER-1833 Server not coming back up in QuorumZxidSyncTest |
Sub-task | Resolved | Major | Cannot Reproduce | Unassigned | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 27/Jan/14 10:08 | 24/Aug/15 03:57 | 24/Aug/15 03:57 | 3.4.7 | 0 | 5 | We got this stack trace: {noformat} [junit] 2014-01-27 09:14:08,481 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testLateLogs [junit] java.lang.AssertionError: waiting for server up [junit] at org.junit.Assert.fail(Assert.java:91) [junit] at org.junit.Assert.assertTrue(Assert.java:43) [junit] at org.apache.zookeeper.test.QuorumBase.startServers(QuorumBase.java:188) [junit] at org.apache.zookeeper.test.QuorumBase.startServers(QuorumBase.java:113) [junit] at org.apache.zookeeper.test.QuorumZxidSyncTest.testLateLogs(QuorumZxidSyncTest.java:116) {noformat} which occurs here, when we stop the servers and restart them. {noformat} qb.shutdownServers(); qb.startServers(); {noformat} |
370136 | No Perforce job exists for this issue. | 1 | 370438 | 4 years, 30 weeks, 3 days ago | 0|i1rslb: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1867 | ZOOKEEPER-1833 Bug in ZkDatabaseCorruptionTest |
Sub-task | Closed | Major | Fixed | Flavio Paiva Junqueira | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 26/Jan/14 21:15 | 13/Mar/14 14:17 | 27/Jan/14 08:39 | 3.4.6, 3.5.0 | tests | 0 | 3 | If I'm reading the test case testCorruption right, it seems to depend on server 5 being elected, but if it is not the case, then it fails waiting for a server to be up. | 370050 | No Perforce job exists for this issue. | 2 | 370352 | 6 years, 2 weeks ago | 0|i1rs27: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1866 | ZOOKEEPER-1833 ClientBase#createClient is failing frequently |
Sub-task | Resolved | Major | Not A Problem | Germán Blanco | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 22/Jan/14 06:09 | 19/May/15 04:23 | 19/May/15 04:23 | 3.4.5 | 3.4.7 | tests | 1 | 4 | ZOOKEEPER-1872 | The following failure pattern has been observed many times in the Windows build. After creating the ZooKeeper client, the corresponding connection bean is not available among the JMX beans, and the tests fail. {code} [junit] 2014-01-22 08:58:22,625 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testInvalidVersion [junit] junit.framework.AssertionFailedError: expected [0x143b92b03330000] expected:<1> but was:<0> [junit] at junit.framework.Assert.fail(Assert.java:47) [junit] at junit.framework.Assert.failNotEquals(Assert.java:283) [junit] at junit.framework.Assert.assertEquals(Assert.java:64) [junit] at junit.framework.Assert.assertEquals(Assert.java:195) [junit] at org.apache.zookeeper.test.JMXEnv.ensureAll(JMXEnv.java:124) [junit] at org.apache.zookeeper.test.ClientBase.createClient(ClientBase.java:191) [junit] at org.apache.zookeeper.test.ClientBase.createClient(ClientBase.java:171) [junit] at org.apache.zookeeper.test.ClientBase.createClient(ClientBase.java:156) [junit] at org.apache.zookeeper.test.ClientBase.createClient(ClientBase.java:149) [junit] at org.apache.zookeeper.test.MultiTransactionTest.setUp(MultiTransactionTest.java:60) {code} |
369198 | No Perforce job exists for this issue. | 1 | 369503 | 4 years, 44 weeks, 2 days ago | 0|i1rmuf: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1865 | Fix retry logic in Learner.connectToLeader() |
Bug | Reopened | Major | Unresolved | Edward Carter | Thawan Kooburat | Thawan Kooburat | 21/Jan/14 19:51 | 05/Feb/20 07:16 | 3.7.0, 3.5.8 | server | 0 | 10 | We discovered a long leader election time today in one of our prod ensembles. Here is a description of the event. Before the old leader went down, it was able to announce a notification message, so 3 out of 5 servers (including the old leader) elected the old leader as the new leader for the next epoch. While the old leader was being rebooted, 2 other machines kept trying to connect to it, so the quorum couldn't form until those 2 machines gave up and moved to the next round of leader election. This is because Learner.connectToLeader() uses simple retry logic. The contract of this method is that it should never spend longer than initLimit trying to connect to the leader. In our outage, each sock.connect() was probably blocked for initLimit, and it is called 5 times. |
369113 | No Perforce job exists for this issue. | 3 | 369418 | 1 year, 17 weeks, 1 day ago | 0|i1rmbr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
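One way to honor the "never spend longer than initLimit" contract described above is to budget each connect attempt against a single overall deadline instead of giving every attempt the full limit. A sketch under that assumption (illustrative names; not the actual Learner.connectToLeader fix):

```java
// Sketch of deadline-bounded retry: the total time across all attempts
// is capped, so five blocked connects cannot sum to five times the limit.
public class BoundedRetry {
    interface Connector { boolean connect(long timeoutMs) throws Exception; }

    static boolean connectWithDeadline(Connector c, long totalTimeoutMs, int maxAttempts) {
        long deadline = System.currentTimeMillis() + totalTimeoutMs;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            long remaining = deadline - System.currentTimeMillis();
            if (remaining <= 0) {
                return false; // overall budget exhausted; stop retrying
            }
            try {
                if (c.connect(remaining)) { // each attempt gets only what's left
                    return true;
                }
            } catch (Exception ignored) {
                // fall through and retry with the remaining budget
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // A connector that always fails, consuming up to 40ms per attempt.
        Connector failing = timeoutMs -> { Thread.sleep(Math.min(40, timeoutMs)); return false; };
        long start = System.currentTimeMillis();
        boolean ok = connectWithDeadline(failing, 100, 100);
        long elapsed = System.currentTimeMillis() - start;
        System.out.println(ok + " " + (elapsed < 1000));
    }
}
```

In the outage scenario, the followers would abandon the dead leader once the shared deadline passed and move on to the next election round.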
| ZooKeeper | ZOOKEEPER-1864 | quorumVerifier is null when creating a QuorumPeerConfig from parsing a Properties object |
Bug | Resolved | Major | Fixed | Michi Mutsuzaki | some one | some one | 20/Jan/14 01:09 | 20/May/14 07:09 | 17/May/14 07:30 | 3.5.0 | server | 0 | 6 | This bug was found when using ZK 3.5.0 with curator-test 2.3.0. curator-test builds a QuorumPeerConfig from a Properties object, and then when we try to run the quorum peer using that configuration, we get an NPE: {noformat} 2014-01-19 21:58:39,768 [myid:] - ERROR [Thread-3:TestingZooKeeperServer$1@138] - From testing server (random state: false) java.lang.NullPointerException at org.apache.zookeeper.server.quorum.QuorumPeer.setQuorumVerifier(QuorumPeer.java:1320) at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:156) at org.apache.curator.test.TestingZooKeeperServer$1.run(TestingZooKeeperServer.java:134) at java.lang.Thread.run(Thread.java:722) {noformat} This happens because QuorumPeerConfig:parseProperties only performs a subset of what 'QuorumPeerConfig:parse(String path)' does. The additional task we need performed in parseProperties is the dynamic config backwards-compatibility check: {noformat} // backward compatibility - dynamic configuration in the same file as static configuration params // see writeDynamicConfig() - we change the config file to new format if reconfig happens if (dynamicConfigFileStr == null) { configBackwardCompatibilityMode = true; configFileStr = path;................ parseDynamicConfig(cfg, electionAlg, true); checkValidity();................ } {noformat} |
368737 | No Perforce job exists for this issue. | 2 | 369041 | 5 years, 44 weeks, 2 days ago | 0|i1rk07: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1863 | Race condition in commit processor leading to out of order request completion, xid mismatch on client. |
Bug | Resolved | Blocker | Fixed | Dutch T. Meyer | Dutch T. Meyer | Dutch T. Meyer | 15/Jan/14 13:23 | 01/Aug/16 18:27 | 15/Jul/14 18:40 | 3.5.0 | 3.5.0 | server | 1 | 13 | ZOOKEEPER-2151 | In CommitProcessor.java processor, if we are at the primary request handler on line 167: {noformat} while (!stopped && !isWaitingForCommit() && !isProcessingCommit() && (request = queuedRequests.poll()) != null) { if (needCommit(request)) { nextPending.set(request); } else { sendToNextProcessor(request); } } {noformat} A request can be handled in this block and be quickly processed and completed on another thread. If queuedRequests is empty, we then exit the block. Next, before this thread makes any more progress, we can get 2 more requests, one get_children(say), and a sync placed on queuedRequests for the processor. Then, if we are very unlucky, the sync request can complete and this object's commit() routine is called (from FollowerZookeeperServer), which places the sync request on the previously empty committedRequests queue. At that point, this thread continues. We reach line 182, which is a check on sync requests. {noformat} if (!stopped && !isProcessingRequest() && (request = committedRequests.poll()) != null) { {noformat} Here we are not processing any requests, because the original request has completed. We haven't dequeued either the read or the sync request in this processor. Next, the poll above will pull the sync request off the queue, and in the following block, the sync will get forwarded to the next processor. This is a problem because the read request hasn't been forwarded yet, so requests are now out of order. I've been able to reproduce this bug reliably by injecting a Thread.sleep(5000) between the two blocks above to make the race condition far more likely, then in a client program. 
{noformat} zoo_aget_children(zh, "/", 0, getchildren_cb, NULL); //Wait long enough for queuedRequests to drain sleep(1); zoo_aget_children(zh, "/", 0, getchildren_cb, &th_ctx[0]); zoo_async(zh, "/", sync_cb, &th_ctx[0]); {noformat} When this bug is triggered, 3 things can happen: 1) Clients will see requests complete out of order and fail on xid mismatches. 2) Kazoo in particular doesn't handle this runtime exception well, and can orphan outstanding requests. 3) I've seen zookeeper servers deadlock, likely because the commit cannot be completed, which can wedge the commit processor. |
368022 | No Perforce job exists for this issue. | 8 | 368329 | 3 years, 33 weeks, 3 days ago | 0|i1rfn3: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1862 | ServerCnxnTest.testServerCnxnExpiry is intermittently failing |
Bug | Resolved | Major | Fixed | Rakesh Radhakrishnan | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 12/Jan/14 22:50 | 16/Mar/14 13:53 | 14/Mar/14 19:11 | 3.5.0 | 3.5.0 | tests | 0 | 5 | ZOOKEEPER-1650 | ServerCnxnTest#testServerCnxnExpiry test case is failing in the trunk build with the following exception {code} [junit] 2014-01-11 10:13:07,696 [myid:] - INFO [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:11221:NIOServerCnxnFactory$AcceptThread@296] - Accepted socket connection from /127.0.0.1:63930 [junit] 2014-01-11 10:13:09,000 [myid:] - INFO [ConnnectionExpirer:NIOServerCnxn@1006] - Closed socket connection for client /127.0.0.1:63930 (no session established for client) [junit] 2014-01-11 10:13:10,697 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@62] - TEST METHOD FAILED testServerCnxnExpiry [junit] java.net.SocketException: Software caused connection abort: recv failed [junit] at java.net.SocketInputStream.socketRead0(Native Method) [junit] at java.net.SocketInputStream.read(SocketInputStream.java:150) [junit] at java.net.SocketInputStream.read(SocketInputStream.java:121) [junit] at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283) [junit] at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325) [junit] at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177) [junit] at java.io.InputStreamReader.read(InputStreamReader.java:184) [junit] at java.io.BufferedReader.fill(BufferedReader.java:154) [junit] at java.io.BufferedReader.readLine(BufferedReader.java:317) [junit] at java.io.BufferedReader.readLine(BufferedReader.java:382) [junit] at org.apache.zookeeper.test.ServerCnxnTest.send4LetterWord(ServerCnxnTest.java:105) [junit] at org.apache.zookeeper.test.ServerCnxnTest.sendRequest(ServerCnxnTest.java:77) [junit] at org.apache.zookeeper.test.ServerCnxnTest.testServerCnxnExpiry(ServerCnxnTest.java:64) [junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [junit] at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) [junit] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [junit] at java.lang.reflect.Method.invoke(Method.java:601) [junit] at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) [junit] at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) [junit] at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) [junit] at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) [junit] at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:52) [junit] at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28) [junit] at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31) [junit] at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:52) [junit] at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263) [junit] at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:69) [junit] at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:48) [junit] at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231) [junit] at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60) [junit] at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229) [junit] at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50) [junit] at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222) [junit] at org.junit.runners.ParentRunner.run(ParentRunner.java:292) [junit] at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39) [junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518) [junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052) [junit] at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906) {code} The likely cause of the failure: during connection expiry, the server closes the socket channel. After the socket closure, a client attempt to read a line of text throws java.net.SocketException. In the failure scenario, the test case has established a socket connection and entered a sleep. In the meantime, the server-side expiration happens and closes the socket channel. After the socket closure, the test case tries to read text using the previously established socket, resulting in a SocketException. There is a race between the client reading the socket and the server closing it. {code} NIOServerCnxn#closeSock closes the socket channel: sock.socket().shutdownOutput(); sock.socket().shutdownInput(); sock.socket().close(); sock.close(); {code} |
367513 | No Perforce job exists for this issue. | 2 | 367821 | 6 years, 1 week, 4 days ago | 0|i1rcj3: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1861 | ConcurrentHashMap isn't used properly in QuorumCnxManager |
Bug | Resolved | Minor | Fixed | Ted Yu | Ted Yu | Ted Yu | 11/Jan/14 18:50 | 13/Feb/14 22:03 | 13/Feb/14 22:03 | 3.5.0 | 3.5.0 | 0 | 6 | queueSendMap is a ConcurrentHashMap. At line 210: {code} if (!queueSendMap.containsKey(sid)) { queueSendMap.put(sid, new ArrayBlockingQueue<ByteBuffer>( SEND_CAPACITY)); {code} By the time control enters the if block, another thread may have concurrently put the same sid into the ConcurrentHashMap. putIfAbsent() should be used. A similar issue occurs at line 307 as well. |
367438 | No Perforce job exists for this issue. | 3 | 367747 | 6 years, 5 weeks, 6 days ago | 0|i1rc2n: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
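The check-then-act race above can be closed with a single atomic map operation. A minimal sketch (illustrative, not the actual QuorumCnxManager patch; `computeIfAbsent` is used here, `putIfAbsent` works equally well):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ConcurrentHashMap;

// Demonstrates why the racy containsKey/put pair should be a single
// atomic call: two registrations of the same sid get the same queue.
public class AtomicPut {
    static final int SEND_CAPACITY = 1;

    public static void main(String[] args) {
        ConcurrentHashMap<Long, ArrayBlockingQueue<byte[]>> queueSendMap =
                new ConcurrentHashMap<>();

        // Racy original (check-then-act): another thread may put between
        // the containsKey() check and the put().
        // if (!queueSendMap.containsKey(sid)) { queueSendMap.put(sid, ...); }

        // Atomic alternative: exactly one queue can win for a given sid.
        long sid = 2L;
        ArrayBlockingQueue<byte[]> q1 =
                queueSendMap.computeIfAbsent(sid, k -> new ArrayBlockingQueue<>(SEND_CAPACITY));
        ArrayBlockingQueue<byte[]> q2 =
                queueSendMap.computeIfAbsent(sid, k -> new ArrayBlockingQueue<>(SEND_CAPACITY));

        System.out.println(q1 == q2); // both callers see the same queue
    }
}
```

With the racy version, the losing thread's queue (and any messages placed on it) would be silently dropped when the winner's put() overwrote it.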
| ZooKeeper | ZOOKEEPER-1860 | Async versions of reconfig don't actually throw KeeperException nor InterruptedException |
Bug | Resolved | Major | Fixed | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | 11/Jan/14 15:54 | 14/Jan/14 06:05 | 13/Jan/14 17:34 | 3.5.0 | 3.5.0 | java client | 0 | 5 | This was caught by [~fournc], the async versions of reconfig in the Java client don't actually throw KeeperException nor InterruptedException. Since this is unreleased code (i.e.: for 3.5.0) I don't think there are issues with changing the API (that is, considering what exceptions are thrown part of the API). | 367427 | No Perforce job exists for this issue. | 1 | 367736 | 6 years, 10 weeks, 2 days ago |
Reviewed
|
0|i1rc07: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1859 | pwriter should be closed in NIOServerCnxn#checkFourLetterWord() |
Bug | Open | Minor | Unresolved | Unassigned | Ted Yu | Ted Yu | 11/Jan/14 13:20 | 12/Jul/18 06:32 | 0 | 4 | {code}
final PrintWriter pwriter = new PrintWriter( new BufferedWriter(new SendBufferWriter())); ... } else if (len == telnetCloseCmd) { cleanupWriterSocket(null); return true; } {code} pwriter should be closed in case of telnetCloseCmd |
367422 | No Perforce job exists for this issue. | 0 | 367731 | 1 year, 36 weeks ago | 0|i1rbz3: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
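The leak above happens because the early telnet-close return path never closes the writer. A sketch of the usual remedy, with hypothetical method and variable names standing in for the NIOServerCnxn internals: manage the PrintWriter with try-with-resources so every exit path closes it.

```java
import java.io.PrintWriter;
import java.io.StringWriter;

// Illustrative sketch, not the actual checkFourLetterWord() code.
public class WriterCleanup {
    static String handleCommand(boolean telnetClose) {
        StringWriter buffer = new StringWriter();
        try (PrintWriter pwriter = new PrintWriter(buffer)) {
            if (telnetClose) {
                return "closed"; // pwriter is still closed on this early return
            }
            pwriter.println("imok");
        } // close() flushes and releases the writer on every path
        return buffer.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println(handleCommand(true) + " " + handleCommand(false));
    }
}
```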
| ZooKeeper | ZOOKEEPER-1858 | ZOOKEEPER-1833 JMX checks - potential race conditions while stopping and starting server |
Sub-task | Closed | Major | Fixed | Rakesh Radhakrishnan | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 03/Jan/14 13:55 | 24/Mar/17 13:20 | 24/Jan/14 19:19 | 3.4.6, 3.5.0 | 0 | 5 | ZOOKEEPER-2686 | I've noticed one potential case where a previously created zkclient session immediately reconnects and publishes its beans while the zkserver is starting back up, affecting the zk startup JMX checks. Say, before stopping the server, a zk client session 0x143576544c50000 exists. While starting the server back up, there is a possibility of seeing that client session in JMX. Following is one such case; please see the logs below, taken from build https://builds.apache.org/job/ZooKeeper-trunk-WinVS2008_java/642/ {code} [junit] 2014-01-03 09:18:12,809 [myid:] - INFO [main-SendThread(127.0.0.1:11222):ClientCnxn$SendThread@1228] - Session establishment complete on server 127.0.0.1/127.0.0.1:11222, sessionid = 0x143576544c50000, negotiated timeout = 30000 [junit] 2014-01-03 09:18:12,809 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11222:ZooKeeperServer@617] - Established session 0x143576544c50000 with negotiated timeout 30000 for client /127.0.0.1:55377{code} {code} [junit] 2014-01-03 09:18:12,391 [myid:] - INFO [main:JMXEnv@135] - ensureOnly:[] [junit] 2014-01-03 09:18:12,395 [myid:] - INFO [main:ClientBase@438] - STARTING server [junit] 2014-01-03 09:18:12,395 [myid:] - INFO [main:ClientBase@359] - CREATING server instance 127.0.0.1:11222 [junit] 2014-01-03 09:18:12,395 [myid:] - INFO [main:NIOServerCnxnFactory@94] - binding to port 0.0.0.0/0.0.0.0:11222 [junit] 2014-01-03 09:18:12,395 [myid:] - INFO [main:ClientBase@334] - STARTING server instance 127.0.0.1:11222 [junit] 2014-01-03 09:18:19,030 [myid:] - INFO [main:JMXEnv@142] - unexpected:org.apache.ZooKeeperService:name0=StandaloneServer_port-1,name1=Connections,name2=127.0.0.1,name3=0x143576544c50000 [junit] 2014-01-03 09:18:19,030 [myid:] - INFO [main:JMXEnv@142] - 
unexpected:org.apache.ZooKeeperService:name0=StandaloneServer_port-1 [junit] 2014-01-03 09:18:19,030 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@62] - TEST METHOD FAILED testDefaultWatcherAutoResetWithChroot [junit] junit.framework.AssertionFailedError: expected:<0> but was:<2> [junit] at junit.framework.Assert.fail(Assert.java:47) [junit] at junit.framework.Assert.failNotEquals(Assert.java:283) [junit] at junit.framework.Assert.assertEquals(Assert.java:64) [junit] at junit.framework.Assert.assertEquals(Assert.java:195) [junit] at junit.framework.Assert.assertEquals(Assert.java:201) [junit] at org.apache.zookeeper.test.JMXEnv.ensureOnly(JMXEnv.java:144) [junit] at org.apache.zookeeper.test.ClientBase.startServer(ClientBase.java:443) [junit] at org.apache.zookeeper.test.DisconnectedWatcherTest.testDefaultWatcherAutoResetWithChroot(DisconnectedWatcherTest.java:123) [junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) {code} |
test | 366221 | No Perforce job exists for this issue. | 8 | 366532 | 6 years, 2 weeks ago | 0|i1r4l3: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1857 | ZOOKEEPER-1833 PrepRequestProcessotTest doesn't shutdown ZooKeeper server |
Sub-task | Closed | Major | Fixed | Germán Blanco | Germán Blanco | Germán Blanco | 03/Jan/14 01:57 | 13/Mar/14 14:17 | 09/Jan/14 18:04 | 3.4.5, 3.5.0 | 3.4.6, 3.5.0 | tests | 0 | 5 | 366126 | No Perforce job exists for this issue. | 4 | 366437 | 6 years, 2 weeks ago | 0|i1r3zz: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1856 | zookeeper C-client can fail to switch from a dead server in a 3+ server ensemble if the client only has a 2 server list. |
Bug | Patch Available | Major | Unresolved | Michi Mutsuzaki | Dutch T. Meyer | Dutch T. Meyer | 02/Jan/14 18:43 | 05/Feb/20 07:11 | 3.7.0, 3.5.8 | c client | 0 | 5 | ZOOKEEPER-2466 | If a client has a 2-server list and is currently connected to the last server in that list, and that server then goes offline, the addrvec_next() call in handle_error() will push the client to the start of the list and terminate the connection. Then, the zoo_cycle_next_server() call in zookeeper_interest will be called in response to the connection failure, and the client will cycle back to the failed server. In this way, a client that has a list of only 2 servers can get stuck on the one failed server. This would only be an issue in an ensemble larger than 2, of course, because failing 1 out of 2 would lead to quorum loss anyway. There are other harmonics possible if every other server in the list is failed, but this is simplest to reproduce in a 3-server ensemble where the client only knows about 2 servers, one of which then fails. There are probably some elegant fixes here, but I think the simplest is to add a flag to track whether a server has been accessed before, and if it hasn't, don't call zoo_cycle_next_server() at the top of the zookeeper_interest() function. |
366070 | No Perforce job exists for this issue. | 1 | 366381 | 1 year, 45 weeks ago | 0|i1r3nr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
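The cycling behavior reported in ZOOKEEPER-1856 above can be modeled in a few lines. The following is a minimal Java sketch (all class and method names here are hypothetical; the real logic lives in the C client's addrvec/zookeeper_interest code) showing why two unconditional advances strand a two-entry list back on the dead server, and how trying the fresh candidate directly, as the report proposes, avoids it:

```java
import java.util.Arrays;
import java.util.List;

// Illustrative model of the reconnect-cycling bug: with a two-entry
// server list, advancing once in the error handler (wrapping to the
// head of the list) and once more at the top of the interest loop
// lands the client right back on the server that just failed.
public class ServerCycler {
    private final List<String> servers;
    private int current;

    public ServerCycler(List<String> servers, int startIndex) {
        this.servers = servers;
        this.current = startIndex;
    }

    // Advance to the next server, wrapping around the end of the list.
    private String advance() {
        current = (current + 1) % servers.size();
        return servers.get(current);
    }

    // Buggy reconnect: handle_error() wraps to the next entry, then
    // zoo_cycle_next_server() advances again unconditionally.
    public String reconnectBuggy() {
        advance();        // error handler: wrap past the failed server
        return advance(); // extra cycle: back to the failed server
    }

    // Reconnect with the proposed fix: a server that has not been
    // accessed yet is tried directly, without the extra cycle.
    public String reconnectFixed() {
        return advance();
    }
}
```

With servers `["A", "B"]` and a failure while connected to `B`, the buggy path returns `B` again while the fixed path moves on to `A`.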
| ZooKeeper | ZOOKEEPER-1855 | calls to zoo_set_servers() fail to flush outstanding request queue. |
Bug | Open | Minor | Unresolved | Unassigned | Dutch T. Meyer | Dutch T. Meyer | 02/Jan/14 18:07 | 05/Feb/20 07:16 | 3.7.0, 3.5.8 | c client | 0 | 3 | If one calls zoo_set_servers to update with a new server list that does not contain the currently connected server, the client will disconnect. Fair enough, but any outstanding requests on the set_requests queue aren't completed, so the next completed request from the new server can fail with an out-of-order XID error. The disconnect occurs in update_addrs() when a reconfig is necessary, though fixing it is not quite as easy as just calling cleanup_bufs there, because you could then race the call to dequeue_completion in zookeeper_process and pull NULL entries for a recently completed request. I don't have a patch for this right now, but I do have a simple repro I can post when time permits. |
366063 | No Perforce job exists for this issue. | 0 | 366374 | 6 years, 11 weeks, 3 days ago | 0|i1r3m7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1854 | ZOOKEEPER-1833 ClientBase ZooKeeper server clean-up |
Sub-task | Closed | Major | Invalid | Germán Blanco | Germán Blanco | Germán Blanco | 30/Dec/13 04:47 | 13/Mar/14 14:17 | 30/Dec/13 06:46 | 3.4.5, 3.5.0 | 3.4.6, 3.5.0 | tests | 0 | 3 | Windows 7, Java 1.7 | The ClientBase utility for tests provides methods for creating a ZooKeeper server, however the cleanup methods don't seem to shut down that ZooKeeper server. | 365727 | No Perforce job exists for this issue. | 2 | 366034 | 6 years, 2 weeks ago | 0|i1r1iv: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1853 | zkCli.sh can't issue a CREATE command containing spaces in the data |
Bug | Closed | Minor | Fixed | Ryan Lamore | sekine coulibaly | sekine coulibaly | 28/Dec/13 17:38 | 21/Jul/16 16:18 | 08/Nov/15 17:27 | 3.4.6, 3.5.0 | 3.4.7, 3.5.2, 3.6.0 | java client | 3 | 13 | Execute the following command in zkCli.sh : create /contacts/1 {"country":"CA","name":"De La Salle"} The result is that only {"id":1,"fullname":"De is stored. The expected result is to have the full JSON payload stored. The CREATE command seems to be cropped after the first space of the data payload. When issuing a create command, all arguments other than -s and -e should be treated as the actual data. |
patch | 365640 | No Perforce job exists for this issue. | 5 | 365947 | 4 years, 18 weeks, 3 days ago | Allows spaces to be used for parameters in zkCli as long as they are in single or double quotes. ie: create /node1 "This will now work" | 0|i1r0zj: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
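The parsing rule the reporter asks for in ZOOKEEPER-1853 above, treating every non-flag argument as data rather than splitting at the first space, can be sketched in isolation. This is a hypothetical helper, not the actual zkCli source (the committed fix, per the release note, instead requires quoting the data, e.g. `create /node1 "This will now work"`):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: when handling "create", join every argument that
// is not the -s or -e mode flag back into a single data payload, so a
// shell-split payload containing spaces is not truncated.
public class CreateArgs {
    public static String joinData(String[] args, int firstDataIndex) {
        List<String> parts = new ArrayList<>();
        for (int i = firstDataIndex; i < args.length; i++) {
            if (args[i].equals("-s") || args[i].equals("-e")) {
                continue; // mode flags are not part of the data
            }
            parts.add(args[i]);
        }
        return String.join(" ", parts);
    }
}
```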
| ZooKeeper | ZOOKEEPER-1852 | ZOOKEEPER-1833 ServerCnxnFactory instance is not properly cleaned up |
Sub-task | Closed | Major | Fixed | Rakesh Radhakrishnan | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 26/Dec/13 05:20 | 13/Mar/14 14:17 | 02/Jan/14 19:48 | 3.4.6, 3.5.0 | tests | 0 | 6 | ClientBase#createNewServerInstance() - If the startup of the server fails, 'serverFactory' is never initialized and remains null. When the flow comes to teardown/shutdown, it will bypass stopping of this server instance due to the following check. This will affect other test case verifications like the jmx check 'JMXEnv#ensureOnly'. ClientBase#shutdownServerInstance {code} static void shutdownServerInstance(ServerCnxnFactory factory, String hostPort) { if (factory != null) { //...shutdown logic } {code} |
365384 | No Perforce job exists for this issue. | 6 | 365686 | 6 years, 2 weeks ago | 0|i1qz9b: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1851 | Follower and Observer Request Processors Do Not Forward create2 Requests |
Bug | Resolved | Blocker | Fixed | Chris Chen | Chris Chen | Chris Chen | 23/Dec/13 18:23 | 19/Jul/14 07:24 | 18/Jul/14 13:49 | 3.5.0 | 3.5.0 | quorum | 0 | 7 | ZOOKEEPER-1297, ZOOKEEPER-1147 | Recent changes to the Observer and Follower Request Processors switch on the request opcode, but create2 is left out. This leads to a condition where the create2 request is passed to the CommitProcessor, but the leader never gets the request, the CommitProcessor can't find a matching request, so the client gets disconnected. Added tests as well. |
patch | 365202 | No Perforce job exists for this issue. | 5 | 365507 | 5 years, 35 weeks, 5 days ago | 0|i1qy5j: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1850 | cppunit test testNonexistingHost in TestZookeeperInit is failing on Ubuntu |
Bug | Closed | Trivial | Won't Fix | Germán Blanco | Germán Blanco | Germán Blanco | 21/Dec/13 12:52 | 13/Mar/14 14:17 | 22/Dec/13 12:18 | 3.4.6, 3.5.0 | 3.4.6, 3.5.0 | tests | 0 | 3 | Linux ubuntu 3.8.0-29-generic #42~precise1-Ubuntu | This is the error: TestZookeeperInit.cc:241: Assertion: assertion failed [Expression: zh==0] |
364866 | No Perforce job exists for this issue. | 1 | 365166 | 6 years, 2 weeks ago | 0|i1qw1r: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1849 | ZOOKEEPER-1833 Need to properly tear down tests in various cases |
Sub-task | Closed | Blocker | Fixed | Germán Blanco | Germán Blanco | Germán Blanco | 17/Dec/13 22:12 | 13/Mar/14 14:16 | 21/Dec/13 16:06 | 3.4.5, 3.5.0 | 3.4.6, 3.5.0 | tests | 0 | 3 | 364377 | No Perforce job exists for this issue. | 3 | 364677 | 6 years, 2 weeks ago | 0|i1qt1j: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1848 | [WINDOWS] Java NIO socket channels does not work with Windows ipv6 on JDK6 |
Bug | Open | Major | Unresolved | Enis Soztutar | Enis Soztutar | Enis Soztutar | 17/Dec/13 20:17 | 05/Feb/20 07:17 | 3.7.0, 3.5.8 | 0 | 3 | ZK uses Java NIO to create ServerSockets from ServerSocketChannels. Under Windows, ipv4 and ipv6 are implemented independently, and Java apparently cannot reuse the same socket channel for both ipv4 and ipv6 sockets. We are getting "java.net.SocketException: Address family not supported by protocol family" exceptions. When the ZK client resolves "localhost", it gets both the v4 127.0.0.1 and v6 ::1 addresses, but the socket channel cannot bind to both v4 and v6. The problem is reported as: http://bugs.sun.com/view_bug.do?bug_id=6230761 http://stackoverflow.com/questions/1357091/binding-an-ipv6-server-socket-on-windows Although the JDK bug is reported as resolved, I have tested with jdk1.6.0_33 without any success; JDK7, however, seems to have fixed this problem. See HBASE-6825 for reference. |
364365 | No Perforce job exists for this issue. | 2 | 364665 | 5 years, 50 weeks ago | 0|i1qsyv: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1847 | Normalize line endings in repository |
Bug | Resolved | Major | Duplicate | Enis Soztutar | Enis Soztutar | Enis Soztutar | 17/Dec/13 18:05 | 05/Feb/14 16:31 | 05/Feb/14 16:31 | 3.5.0 | 0 | 3 | It is good practice to have all the code in the repository use the same line endings (LF) so that patches can be applied normally. We can add a gitattributes file so that checked out code can still have platform dependent line endings. More reading: https://help.github.com/articles/dealing-with-line-endings http://stackoverflow.com/questions/170961/whats-the-best-crlf-handling-strategy-with-git |
364348 | No Perforce job exists for this issue. | 1 | 364648 | 6 years, 7 weeks, 1 day ago | 0|i1qsv3: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1846 | Cached InetSocketAddresses prevent proper dynamic DNS resolution |
Bug | Resolved | Minor | Duplicate | Unassigned | Benjamin Jaton | Benjamin Jaton | 16/Dec/13 20:18 | 30/Jan/19 07:50 | 16/Jan/17 01:40 | 3.4.6 | quorum | 1 | 12 | 0 | 600 | ZOOKEEPER-1506 | The class QuorumPeer maintains a Map<Long, QuorumServer> quorumPeers. Each QuorumServer is created with an instance of InetSocketAddress electionAddr, and holds it forever. I believe this is why the ZooKeeper servers can't resolve each other dynamically: If a ZooKeeper in the ensemble cannot be resolved at startup, it will never be resolved (until restart of the JVM), constantly failing with an UnknownHostException, even when the node is back up and reachable. I would suggest recreating an InetSocketAddress every time we retry the connection. |
100% | 100% | 600 | 0 | patch, pull-request-available | 364152 | No Perforce job exists for this issue. | 4 | 364452 | 3 years, 9 weeks, 3 days ago | Forces the re-resolve, on error, of the Peers' Hostname to IP address, which is an issue in virtual/cloud environments where IPs are assigned dynamically upon every container startup. If the Hostname is unresolvable or the connection fails (IP change), this DNS refresh process is immediately triggered. |
0|i1qrnr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
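The suggestion in ZOOKEEPER-1846 above hinges on a JDK behavior: `java.net.InetSocketAddress` performs its DNS lookup once, at construction, and caches the result (or the failure) for the lifetime of the object. A minimal sketch of the proposed approach, with a hypothetical wrapper class rather than the actual QuorumPeer code:

```java
import java.net.InetSocketAddress;

// Sketch of the report's suggestion: instead of caching one
// InetSocketAddress forever, build a fresh one on every connection
// attempt, so a host that was unresolvable at startup (or whose IP
// changed) can resolve correctly later.
public class ReResolvingAddress {
    private final String host;
    private final int port;

    public ReResolvingAddress(String host, int port) {
        this.host = host;
        this.port = port;
    }

    // Each call triggers a new DNS lookup; a cached InetSocketAddress
    // would keep the possibly stale or failed result of the first one.
    public InetSocketAddress resolve() {
        return new InetSocketAddress(host, port);
    }
}
```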
| ZooKeeper | ZOOKEEPER-1845 | FLETest.testLE fails on windows |
Bug | Closed | Major | Duplicate | Michi Mutsuzaki | Michi Mutsuzaki | Michi Mutsuzaki | 16/Dec/13 19:13 | 13/Mar/14 14:17 | 17/Dec/13 17:35 | 3.4.5 | 3.4.6 | quorum, tests | 0 | 3 | ZOOKEEPER-1733 | windows 7 x64 java 1.7.0_45 | This test waits for the leader election to settle, but it is possible that 3 follower threads join before the leader thread joins. We should wait for the leader thread to join in a loop for some time. {noformat} Leader hasn't joined: 5 junit.framework.AssertionFailedError: Leader hasn't joined: 5 at org.apache.zookeeper.test.FLETest.testLE(FLETest.java:313) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:52) {noformat} |
364146 | No Perforce job exists for this issue. | 1 | 364446 | 6 years, 2 weeks ago | 0|i1qrmf: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1844 | TruncateTest fails on windows |
Bug | Closed | Critical | Fixed | Rakesh Radhakrishnan | Michi Mutsuzaki | Michi Mutsuzaki | 16/Dec/13 18:58 | 13/Mar/14 14:17 | 12/Feb/14 11:01 | 3.4.5, 3.5.0 | 3.4.6, 3.5.0 | server | 0 | 5 | windows | TruncateTest has been failing consistently on windows: https://builds.apache.org/job/ZooKeeper-trunk-WinVS2008_java/627/testReport/junit/org.apache.zookeeper.test/TruncateTest/testTruncate/ |
364141 | No Perforce job exists for this issue. | 7 | 364441 | 6 years, 2 weeks ago | 0|i1qrlb: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1843 | Oddity in ByteBufferInputStream skip |
Bug | Resolved | Minor | Fixed | Bill Havanki | Justin SB | Justin SB | 16/Dec/13 18:09 | 28/Apr/14 07:05 | 26/Apr/14 17:17 | 3.5.0 | 0 | 5 | I was reading ByteBufferInputStream.java; here is the skip method: public long skip(long n) throws IOException { long newPos = bb.position() + n; if (newPos > bb.remaining()) { n = bb.remaining(); } bb.position(bb.position() + (int) n); return n; } The first two lines look wrong; we compare a "point" (position) to a "distance" (remaining). I think the test should be if (newPos >= bb.limit()). Or more simply: public long skip(long n) throws IOException { int remaining = buffer.remaining(); if (n > remaining) { n = remaining; } buffer.position(buffer.position() + (int) n); return n; } |
364139 | No Perforce job exists for this issue. | 2 | 364439 | 5 years, 47 weeks, 3 days ago | 0|i1qrkv: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
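The simpler `skip` proposed in ZOOKEEPER-1843 above can be extracted into a standalone helper and exercised directly. This is an illustrative restatement of the fix from the report, not the class's actual source; note the original comparison `bb.position() + n > bb.remaining()` mixes a position with a byte count, which mis-clamps whenever the buffer's position is nonzero:

```java
import java.nio.ByteBuffer;

// The corrected skip logic: clamp n to the bytes actually remaining,
// then advance the position by exactly that amount.
public class BufferSkip {
    public static long skip(ByteBuffer buffer, long n) {
        int remaining = buffer.remaining();
        if (n > remaining) {
            n = remaining; // never skip past the end of the buffer
        }
        buffer.position(buffer.position() + (int) n);
        return n;
    }
}
```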
| ZooKeeper | ZOOKEEPER-1842 | Election listening thread not shutdown for leaders or followers |
Bug | Open | Major | Unresolved | Unassigned | Jeremy Stribling | Jeremy Stribling | 16/Dec/13 15:35 | 16/Dec/13 16:10 | quorum | 0 | 2 | Linux, ZK trunk | When I was testing the patch for https://issues.apache.org/jira/browse/ZOOKEEPER-1691, the test included with that patch was failing for me. The problem happened when the test shuts down some followers and then attempts to bring them back up: {quote} 2013-12-13 17:31:03,976 [myid:1] - INFO [QuorumPeer[myid=1]/127.0.0.1:11227:Follower@194] - shutdown called java.lang.Exception: shutdown Follower at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:194) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:971) ... 2013-12-13 17:31:03,992 [myid:1] - INFO [QuorumPeerListener:QuorumCnxManager$Listener@544] - My election bind port: localhost/127.0.0.1:11229 2013-12-13 17:31:03,992 [myid:1] - ERROR [localhost/127.0.0.1:11229:QuorumCnxManager$Listener@557] - Exception while listening java.net.BindException: Address already in use at java.net.PlainSocketImpl.socketBind(Native Method) at java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:376) at java.net.ServerSocket.bind(ServerSocket.java:376) at java.net.ServerSocket.bind(ServerSocket.java:330) at org.apache.zookeeper.server.quorum.QuorumCnxManager$Listener.run(QuorumCnxManager.java:546) {quote} The problem appears to be that when follower.shutdown() is called in QuorumPeer.run(), the election algorithm is never shut down, so when the node restarts it can't bind back to the same port. I will upload a patch that calls shutdown() for both the leader and the follower in this case, but I'm not positive it's the right place or fix for this issue, so feedback would be appreciated. |
364115 | No Perforce job exists for this issue. | 1 | 364415 | 6 years, 14 weeks, 3 days ago | 0|i1qrfj: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
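The `BindException` in ZOOKEEPER-1842 above comes down to a basic socket rule: a listening port cannot be re-bound while the old listener socket is still open, which is exactly what a never-shut-down election listener causes on restart. A self-contained demonstration using plain JDK sockets (not ZooKeeper code):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

// Demonstrates the restart failure mode: while a listener socket stays
// open, a second bind to the same port fails; once it is properly
// closed, the rebind succeeds.
public class RebindDemo {
    // Attempt to bind a fresh listener to the given port.
    public static boolean canRebind(int port) {
        try (ServerSocket s = new ServerSocket()) {
            s.bind(new InetSocketAddress("127.0.0.1", port));
            return true;
        } catch (IOException e) {
            return false; // typically java.net.BindException
        }
    }

    // Returns true iff the rebind fails while the first listener is
    // open and succeeds after it is closed.
    public static boolean listenerBlocksRebind() {
        try {
            ServerSocket first = new ServerSocket();
            first.bind(new InetSocketAddress("127.0.0.1", 0)); // ephemeral port
            int port = first.getLocalPort();
            boolean whileOpen = canRebind(port);  // leftover listener in place
            first.close();                        // the missing shutdown step
            boolean afterClose = canRebind(port);
            return !whileOpen && afterClose;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```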
| ZooKeeper | ZOOKEEPER-1841 | ZOOKEEPER-1833 problem in QuorumTest |
Sub-task | Closed | Major | Fixed | Germán Blanco | Germán Blanco | Germán Blanco | 16/Dec/13 09:32 | 13/Mar/14 14:16 | 18/Dec/13 08:06 | 3.4.5 | 3.4.6 | tests | 0 | 2 | Windows, Java 1.7 | QuorumTest.testNoLogBeforeLeaderEstablishment fails with Assertion: "NOt following" | 364042 | No Perforce job exists for this issue. | 5 | 364342 | 6 years, 2 weeks ago | 0|i1qqzb: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1840 | Server tries to connect to itself during dynamic reconfig |
Bug | Resolved | Minor | Fixed | Alexander Shraer | Bruno Freudensprung | Bruno Freudensprung | 14/Dec/13 06:53 | 16/Apr/14 07:07 | 16/Apr/14 03:07 | 3.5.0 | quorum | 0 | 5 | Submitted this bug on a suggestion of Alexander Shraer (see https://issues.apache.org/jira/browse/ZOOKEEPER-1691) How to reproduce: == Server 1 zoo.cfg: standaloneEnabled=false dynamicConfigFile=<path to>/confdyn1/zoo.cfg.dynamic == Server 1 zoo.cfg.dynamic: server.1=localhost:2888:3888:participant;localhost:2181 == Server 2 zoo.cfg: standaloneEnabled=false dynamicConfigFile=<path to>/confdyn2/zoo.cfg.dynamic == Server 2 zoo.cfg.dynamic (it is "aware" of the server 1, as mentioned in the Dynamic Reconfiguration - User Manual that I should have read more carefully yesterday): server.1=localhost:2888:3888:participant;localhost:2181 server.2=localhost:2889:3889:participant;localhost:2182 Start server 1 Start server 2 == use client 1 to issue a reconfig command on server 1: [zk: localhost:2181(CONNECTED) 1] reconfig -add server.2=localhost:2889:3889:participant;localhost:2182 Committed new configuration: server.1=localhost:2888:3888:participant;localhost:2181 server.2=localhost:2889:3889:participant;localhost:2182 version=100000003 There are strange stack traces in both server consoles. 
Server 1: 2013-12-12 22:31:40,888 [myid:1] - WARN [ProcessThread(sid:1 cport:-1)::QuorumCnxManager@390] - Cannot open channel to 2 at election address localhost/127.0.0.1:3889 java.net.ConnectException: Connection refused: connect at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351) at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:213) at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:200) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366) at java.net.Socket.connect(Socket.java:529) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:375) at org.apache.zookeeper.server.quorum.QuorumPeer.connectNewPeers(QuorumPeer.java:1252) at org.apache.zookeeper.server.quorum.QuorumPeer.setLastSeenQuorumVerifier(QuorumPeer.java:1272) at org.apache.zookeeper.server.quorum.Leader.propose(Leader.java:1071) at org.apache.zookeeper.server.quorum.ProposalRequestProcessor.processRequest(ProposalRequestProcessor.java:78) at org.apache.zookeeper.server.PrepRequestProcessor.pRequest(PrepRequestProcessor.java:864) at org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:144) 2013-12-12 22:31:41,919 [myid:1] - WARN [LearnerHandler-/127.0.0.1:52301:QuorumPeer@1259] - Restarting Leader Election 2013-12-12 22:31:41,920 [myid:1] - INFO [localhost/127.0.0.1:3888:QuorumCnxManager$Listener@571] - Leaving listener 2013-12-12 22:31:41,920 [myid:1] - INFO [QuorumPeerListener:QuorumCnxManager$Listener@544] - My election bind port: localhost/127.0.0.1:3888 2013-12-12 22:31:44,438 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection$Messenger$WorkerReceiver@410] - WorkerReceiver is down 2013-12-12 22:31:44,439 [myid:1] - INFO [WorkerSender[myid=1]:FastLeaderElection$Messenger$WorkerSender@442] - WorkerSender is down Server 2: 2013-12-12 22:31:41,894 [myid:2] - WARN [QuorumPeer[myid=2]/127.0.0.1:2182:QuorumCnxManager@390] - Cannot open 
channel to 2 at election address localhost/127.0.0.1:3889 java.net.ConnectException: Connection refused: connect at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351) at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:213) at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:200) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366) at java.net.Socket.connect(Socket.java:529) at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:375) at org.apache.zookeeper.server.quorum.QuorumPeer.connectNewPeers(QuorumPeer.java:1252) at org.apache.zookeeper.server.quorum.QuorumPeer.setLastSeenQuorumVerifier(QuorumPeer.java:1272) at org.apache.zookeeper.server.quorum.Follower.processPacket(Follower.java:131) at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:89) at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:967) 2013-12-12 22:31:41,923 [myid:2] - WARN [QuorumPeer[myid=2]/127.0.0.1:2182:QuorumPeer@1259] - Restarting Leader Election 2013-12-12 22:31:41,924 [myid:2] - INFO [QuorumPeerListener:QuorumCnxManager$Listener@544] - My election bind port: localhost/127.0.0.1:3889 |
363737 | No Perforce job exists for this issue. | 1 | 364043 | 5 years, 49 weeks, 1 day ago | 0|i1qp53: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1839 | Deadlock in NettyServerCnxn |
Bug | Closed | Critical | Fixed | Rakesh Radhakrishnan | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 13/Dec/13 09:25 | 13/Mar/14 14:16 | 15/Dec/13 23:21 | 3.4.5 | 3.4.6, 3.5.0 | server | 0 | 7 | ZOOKEEPER-1179 | Deadlock found during NettyServerCnxn closure. Please see the attached threaddump. | 363540 | No Perforce job exists for this issue. | 3 | 363846 | 6 years, 2 weeks ago | 0|i1qnxj: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1838 | ZOOKEEPER-1833 ZooKeeper shutdown hangs indefinitely at NioServerSocketChannelFactory.releaseExternalResources |
Sub-task | Closed | Major | Duplicate | Rakesh Radhakrishnan | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 13/Dec/13 07:18 | 13/Mar/14 14:17 | 13/Dec/13 14:24 | 3.4.6 | server | 0 | 5 | ZOOKEEPER-1715 | Zookeeper shutdown hangs when releasing external resources. This has been observed when executing NioNettySuiteTest. {code} "main" prio=6 tid=0x01498400 nid=0x2328 waiting on condition [0x0158e000..0x0158fe28] java.lang.Thread.State: TIMED_WAITING (parking) at sun.misc.Unsafe.park(Native Method) - parking to wait for <0x22f58918> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:198) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1963) at java.util.concurrent.ThreadPoolExecutor.awaitTermination(ThreadPoolExecutor.java:1244) at org.jboss.netty.util.internal.ExecutorUtil.terminate(ExecutorUtil.java:87) at org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory.releaseExternalResources(NioServerSocketChannelFactory.java:146) at org.jboss.netty.bootstrap.Bootstrap.releaseExternalResources(Bootstrap.java:324) at org.apache.zookeeper.server.NettyServerCnxnFactory.shutdown(NettyServerCnxnFactory.java:345) at org.apache.zookeeper.test.ClientBase.shutdownServerInstance(ClientBase.java:355) at org.apache.zookeeper.test.ClientBase.stopServer(ClientBase.java:422) {code} |
363530 | No Perforce job exists for this issue. | 2 | 363836 | 6 years, 2 weeks ago | 0|i1qnvb: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1837 | ZOOKEEPER-1833 Fix JMXEnv checks (potential race conditions) |
Sub-task | Closed | Major | Fixed | Germán Blanco | Germán Blanco | Germán Blanco | 13/Dec/13 02:40 | 13/Mar/14 14:16 | 16/Jan/14 18:01 | 3.4.5, 3.5.0 | 3.4.6, 3.5.0 | tests | 0 | 5 | Windows 8 | The following failures in ZooKeeper-3.4-WinVS2008_java and ZooKeeper-trunk-WinVS2008_java require fixing: [junit] junit.framework.AssertionFailedError: expected:<0> but was:<1> [junit] at junit.framework.Assert.fail(Assert.java:47) [junit] at junit.framework.Assert.failNotEquals(Assert.java:283) [junit] at junit.framework.Assert.assertEquals(Assert.java:64) [junit] at junit.framework.Assert.assertEquals(Assert.java:195) [junit] at junit.framework.Assert.assertEquals(Assert.java:201) [junit] at org.apache.zookeeper.test.JMXEnv.ensureOnly(JMXEnv.java:138) [junit] at org.apache.zookeeper.test.ClientBase.startServer(ClientBase.java:417) [junit] at org.apache.zookeeper.test.ZooKeeperQuotaTest.testQuota(ZooKeeperQuotaTest.java:80) [junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [junit] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) [junit] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [junit] at java.lang.reflect.Method.invoke(Method.java:597) [junit] at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) [junit] at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) [junit] at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) [junit] at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) [junit] junit.framework.AssertionFailedError: expected:<0> but was:<1> [junit] at junit.framework.Assert.fail(Assert.java:47) [junit] at junit.framework.Assert.failNotEquals(Assert.java:283) [junit] at junit.framework.Assert.assertEquals(Assert.java:64) [junit] at junit.framework.Assert.assertEquals(Assert.java:195) [junit] at 
junit.framework.Assert.assertEquals(Assert.java:201) [junit] at org.apache.zookeeper.test.JMXEnv.ensureOnly(JMXEnv.java:138) [junit] at org.apache.zookeeper.test.ClientBase.startServer(ClientBase.java:417) [junit] at org.apache.zookeeper.test.ZooKeeperQuotaTest.testQuota(ZooKeeperQuotaTest.java:80) [junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [junit] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) [junit] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [junit] at java.lang.reflect.Method.invoke(Method.java:597) [junit] at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) [junit] at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) [junit] at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) [junit] at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) [junit] at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:52) [ junit.framework.AssertionFailedError: expected [0x142e5f027b50001] expected:<1> but was:<0> at org.apache.zookeeper.test.JMXEnv.ensureAll(JMXEnv.java:115) at org.apache.zookeeper.test.ClientBase.createClient(ClientBase.java:197) at org.apache.zookeeper.test.ClientBase.createClient(ClientBase.java:171) at org.apache.zookeeper.test.ClientBase.createClient(ClientBase.java:156) at org.apache.zookeeper.test.ClientBase.createClient(ClientBase.java:149) at org.apache.zookeeper.ZooKeeperTest.testDeleteRecursive(ZooKeeperTest.java:45) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:52) |
363493 | No Perforce job exists for this issue. | 12 | 363799 | 6 years, 2 weeks ago | 0|i1qnn3: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1836 | addrvec_next() fails to set next parameter if addrvec_hasnext() returns false |
Bug | Resolved | Trivial | Fixed | Dutch T. Meyer | Dutch T. Meyer | Dutch T. Meyer | 12/Dec/13 19:50 | 20/May/14 07:09 | 15/May/14 16:40 | 3.5.0 | c client | 0 | 5 | There is a relatively innocuous but useless pointer assignment in addrvec_next(): 195 void addrvec_next(addrvec_t *avec, struct sockaddr_storage *next) .... 203 if (!addrvec_hasnext(avec)) 204 { 205 next = NULL; 206 return; The assignment on (205) has no effect, as next is a local copy of the pointer that is lost upon function return. Likely this should be a memset to zero out the actual parameter. |
363459 | No Perforce job exists for this issue. | 2 | 363765 | 5 years, 44 weeks, 2 days ago | 0|i1qnfr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
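The bug class in ZOOKEEPER-1836 above (assigning to a by-value pointer parameter, which the caller never sees) has a direct Java analogue: reassigning a reference parameter changes only the local copy, while mutating the referenced object, the equivalent of the suggested `memset` on the out-parameter, is visible to the caller. A hypothetical illustration, not ZooKeeper code:

```java
// Java analogue of "next = NULL" inside addrvec_next(): the reassignment
// is invisible outside the method, whereas mutating the object is not.
public class OutParamDemo {
    public static class Addr {
        public String host = "";
    }

    // No effect outside this method, like the no-op C assignment.
    public static void clearByReassign(Addr next) {
        next = null;
    }

    // Visible to the caller, like memset(next, 0, sizeof(*next)) in C.
    public static void clearByMutation(Addr next) {
        next.host = "";
    }
}
```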
| ZooKeeper | ZOOKEEPER-1835 | dynamic configuration file renaming fails on Windows |
Bug | Resolved | Major | Fixed | Bruno Freudensprung | Bruno Freudensprung | Bruno Freudensprung | 12/Dec/13 16:05 | 04/Jul/14 07:11 | 03/Jul/14 22:25 | 3.5.0 | 3.5.0 | quorum | 0 | 7 | Windows 7 64-bit, Oracle Java 1.6.0_32-b05 | On Windows, reconfig fails to rename the tmp dynamic config file to the real dynamic config filename. Javadoc of java.io.File.renameTo says the behavior is highly platform dependent, so I guess this should not be a big surprise. The problem occurs in src/java/main/org/apache/zookeeper/server/quorum/QuorumPeerConfig.java, which could be modified like this: + curFile.delete(); if (!tmpFile.renameTo(curFile)) { + configFile.delete(); if (!tmpFile.renameTo(configFile)) { As suggested by Alex in https://issues.apache.org/jira/browse/ZOOKEEPER-1691 (btw there is more information about my test scenario over there) it is a bit "scary" to delete the current configuration file. |
363424 | No Perforce job exists for this issue. | 6 | 363730 | 5 years, 37 weeks, 6 days ago | Patch described in this comment below: https://issues.apache.org/jira/browse/ZOOKEEPER-1835?focusedCommentId=13848420&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13848420 |
0|i1qn7z: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
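The "scary" delete-then-rename workaround in ZOOKEEPER-1835 above exists because `File.renameTo` will not replace an existing target on Windows. A hedged sketch of the `java.nio.file` alternative: `Files.move` with `REPLACE_EXISTING` (and `ATOMIC_MOVE` where the filesystem supports the combination) swaps the files in one step, so the live config never has to be deleted first. Illustrative only, not the actual QuorumPeerConfig fix:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Portable tmp-file swap: replace the current config with the tmp file
// atomically where possible, without a separate delete step.
public class ConfigSwap {
    public static void swap(Path tmpFile, Path configFile) throws IOException {
        try {
            Files.move(tmpFile, configFile,
                    StandardCopyOption.ATOMIC_MOVE,
                    StandardCopyOption.REPLACE_EXISTING);
        } catch (IOException | UnsupportedOperationException e) {
            // Some filesystems reject the atomic replace; fall back to a
            // plain replacing move rather than deleting the live config.
            Files.move(tmpFile, configFile, StandardCopyOption.REPLACE_EXISTING);
        }
    }
}
```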
| ZooKeeper | ZOOKEEPER-1834 | ZOOKEEPER-1833 Catch IOException in FileTxnLog |
Sub-task | Closed | Major | Fixed | Flavio Paiva Junqueira | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 10/Dec/13 17:12 | 13/Mar/14 14:16 | 10/Dec/13 19:55 | 3.4.6, 3.5.0 | 0 | 3 | Upon an IOException in FileTxnLog#next(), the log file remains open, which causes test cases at least in BufferSizeTest to fail. We need to add a catch block. | 362998 | No Perforce job exists for this issue. | 4 | 363304 | 6 years, 2 weeks ago | 0|i1qklr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
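The leak pattern described in ZOOKEEPER-1834 above, an exception in a `next()`-style read leaving the underlying file open, is fixed with a catch block that closes the resource before re-throwing. A generic sketch with hypothetical names, not the actual FileTxnLog source:

```java
import java.io.Closeable;
import java.io.IOException;

// Iterator-style reader that releases its underlying file when a read
// fails, so a failed iteration cannot leak an open log file.
public class SafeIterator {
    private final Closeable logFile;
    private boolean closed = false;

    public SafeIterator(Closeable logFile) {
        this.logFile = logFile;
    }

    public byte[] next(RecordReader reader) throws IOException {
        try {
            return reader.read();
        } catch (IOException e) {
            try {
                close(); // the missing catch block: release the file
            } catch (IOException closeFailure) {
                e.addSuppressed(closeFailure);
            }
            throw e; // propagate the original read failure
        }
    }

    public void close() throws IOException {
        if (!closed) {
            closed = true;
            logFile.close();
        }
    }

    public interface RecordReader {
        byte[] read() throws IOException;
    }
}
```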
| ZooKeeper | ZOOKEEPER-1833 | fix windows build |
Bug | Resolved | Blocker | Fixed | Michi Mutsuzaki | Michi Mutsuzaki | Michi Mutsuzaki | 06/Dec/13 20:49 | 03/Nov/15 09:46 | 03/Nov/15 09:46 | 3.4.5 | 3.4.7 | 1 | 7 | ZOOKEEPER-1414, ZOOKEEPER-1459, ZOOKEEPER-1834, ZOOKEEPER-1837, ZOOKEEPER-1838, ZOOKEEPER-1841, ZOOKEEPER-1849, ZOOKEEPER-1852, ZOOKEEPER-1854, ZOOKEEPER-1857, ZOOKEEPER-1858, ZOOKEEPER-1866, ZOOKEEPER-1867, ZOOKEEPER-1868, ZOOKEEPER-1872, ZOOKEEPER-1873, ZOOKEEPER-1874, ZOOKEEPER-1904, ZOOKEEPER-1905, ZOOKEEPER-2047 | ZOOKEEPER-1179 | A bunch of 3.4 tests are failing on windows. {noformat} [junit] 2013-12-06 08:40:59,692 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testEarlyLeaderAbandonment [junit] 2013-12-06 08:41:10,472 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testHighestZxidJoinLate [junit] 2013-12-06 08:45:31,085 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testUpdatingEpoch [junit] 2013-12-06 08:55:34,630 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testObserversHammer [junit] 2013-12-06 08:55:59,889 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testAsyncExistsFailure_NoNode [junit] 2013-12-06 08:56:00,571 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testAsyncGetACL [junit] 2013-12-06 08:56:02,626 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testAsyncGetChildrenEmpty [junit] 2013-12-06 08:56:03,491 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testAsyncGetChildrenSingle [junit] 2013-12-06 08:56:11,276 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testAsyncGetChildrenTwo [junit] 2013-12-06 08:56:13,878 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testAsyncGetChildrenFailure_NoNode [junit] 2013-12-06 08:56:16,294 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testAsyncGetChildren2Empty [junit] 2013-12-06 08:56:18,622 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testAsyncGetChildren2Single [junit] 2013-12-06 08:56:21,224 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testAsyncGetChildren2Two [junit] 2013-12-06 08:56:23,738 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED 
testAsyncGetChildren2Failure_NoNode [junit] 2013-12-06 08:56:26,058 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testAsyncGetData [junit] 2013-12-06 08:56:28,482 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testAsyncGetDataFailure_NoNode [junit] 2013-12-06 08:57:35,527 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testStartupFailureCreate [junit] 2013-12-06 08:57:38,645 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testStartupFailureSet [junit] 2013-12-06 08:57:41,261 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testStartupFailureSnapshot [junit] 2013-12-06 08:59:22,222 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testClientWithWatcherObj [junit] 2013-12-06 09:00:05,592 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testClientCleanup [junit] 2013-12-06 09:01:24,113 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testBindByAddress [junit] 2013-12-06 09:02:14,123 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testClientwithoutWatcherObj [junit] 2013-12-06 09:05:56,461 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testZeroWeightQuorum [junit] 2013-12-06 09:08:18,747 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testResyncByDiffAfterFollowerCrashes [junit] 2013-12-06 09:09:42,271 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testFourLetterWords [junit] 2013-12-06 09:14:03,770 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testLE [junit] 2013-12-06 09:46:30,002 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testHierarchicalQuorum [junit] 2013-12-06 09:50:26,912 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testHammerBasic [junit] 2013-12-06 09:51:07,604 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testQuotaWithQuorum [junit] 2013-12-06 09:52:41,515 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testNull [junit] 2013-12-06 09:53:22,648 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testDeleteWithChildren [junit] 2013-12-06 09:56:49,061 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testClientwithoutWatcherObj [junit] 2013-12-06 09:58:27,705 [myid:] - INFO 
[main:ZKTestCase$1@65] - FAILED testGetView [junit] 2013-12-06 09:59:07,856 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testViewContains [junit] 2013-12-06 10:01:31,418 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testSessionMoved [junit] 2013-12-06 10:04:50,542 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testMultiToFollower [junit] 2013-12-06 10:07:55,361 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testBehindLeader [junit] 2013-12-06 10:10:57,439 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testLateLogs [junit] 2013-12-06 10:12:05,336 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testConnectionEvents [junit] 2013-12-06 10:14:02,781 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testRecovery [junit] 2013-12-06 10:14:37,220 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testFail [junit] 2013-12-06 10:14:46,925 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testRestoreCommittedLog [junit] 2013-12-06 10:15:30,109 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testAuth [junit] 2013-12-06 10:16:09,256 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testAuth [junit] 2013-12-06 10:16:44,586 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testAuth [junit] 2013-12-06 10:17:19,222 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testBadSaslAuthNotifiesWatch [junit] 2013-12-06 10:17:54,239 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testAuthFail [junit] 2013-12-06 10:18:35,623 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testAuth [junit] 2013-12-06 10:18:47,094 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testAuth [junit] 2013-12-06 10:19:06,770 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testCreateAfterCloseShouldFail [junit] 2013-12-06 10:20:23,884 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testBasic {noformat} |
362495 | No Perforce job exists for this issue. | 5 | 362789 | 4 years, 20 weeks, 2 days ago | 0|i1qhh3: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1832 | Add count of connected clients to submitted ganglia metrics |
Bug | Open | Minor | Unresolved | Unassigned | Ben Hartshorne | Ben Hartshorne | 05/Dec/13 13:21 | 06/Dec/13 16:55 | 0 | 1 | The ganglia zookeeper plugin does not report the number of connected clients, though this information is available from the 'stat' command. | 362221 | No Perforce job exists for this issue. | 0 | 362516 | 6 years, 16 weeks ago | 0|i1qfsf: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
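The report above (ZOOKEEPER-1832) notes that the client count is already available from the 'stat' four-letter word. A minimal sketch of extracting it, assuming the 3.4-era 'stat' format in which an aggregate "Connections: N" line follows the per-client list; the function name is illustrative, not part of the ganglia plugin:

```python
def parse_stat_connections(stat_text):
    """Extract the connected-client count from 'stat' output.

    Assumes the 3.4-era format, where an aggregate line such as
    "Connections: 5" appears after the per-client list.
    """
    for line in stat_text.splitlines():
        if line.startswith("Connections:"):
            return int(line.split(":", 1)[1].strip())
    return None  # line not found; the server's format may differ
```

A ganglia module could then submit this value alongside the metrics it already reports.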
| ZooKeeper | ZOOKEEPER-1831 | ZOOKEEPER-1829 Document remove watches details to the guide |
Sub-task | Resolved | Major | Fixed | Rakesh Radhakrishnan | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 05/Dec/13 04:20 | 16/Dec/18 09:36 | 03/Apr/14 19:14 | 3.5.0 | documentation | 0 | 4 | This JIRA is for documenting the details of removing the watches | remove_watches | 362120 | No Perforce job exists for this issue. | 2 | 362415 | 5 years, 50 weeks, 6 days ago | 0|i1qf5z: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1830 | ZOOKEEPER-1829 Support command line shell for removing watches |
Sub-task | Resolved | Critical | Fixed | Rakesh Radhakrishnan | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 04/Dec/13 23:57 | 16/Dec/18 09:36 | 03/Apr/14 19:04 | 3.5.0 | 0 | 4 | This JIRA to discuss the command line shell for removing watches. Makes it easier to do ad-hoc testing. | remove_watches | 362090 | No Perforce job exists for this issue. | 2 | 362385 | 5 years, 50 weeks, 6 days ago | 0|i1qezb: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1829 | Umbrella jira for removing watches that are no longer of interest |
New Feature | Resolved | Critical | Fixed | Rakesh Radhakrishnan | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 04/Dec/13 23:57 | 16/Dec/18 09:35 | 04/Apr/14 13:52 | 3.5.0 | java client, server | 1 | 3 | ZOOKEEPER-442, ZOOKEEPER-1830, ZOOKEEPER-1831 | ZOOKEEPER-1887 | remove_watches | 362089 | No Perforce job exists for this issue. | 0 | 362384 | 5 years, 50 weeks, 6 days ago | 0|i1qez3: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1828 | Audit src/c/src/zookeeper.c for missing error checking & early returns |
Bug | Open | Major | Unresolved | Unassigned | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | 03/Dec/13 19:29 | 06/Dec/13 16:34 | c client | 0 | 1 | After discussing the patch for ZOOKEEPER-1632 we came to realize that many methods in the C client don't honor their own internal error checking (i.e., they don't return early on error). This is bad because a method might return an error while the callback (in the async case) may still be invoked, so at that point it's unclear how the caller should deal with resource deallocation. |
361831 | No Perforce job exists for this issue. | 0 | 362128 | 6 years, 16 weeks, 2 days ago | 0|i1qde7: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1827 | Reduce the number of mntr calls in zookeeper ganglia plugin |
Improvement | Open | Major | Unresolved | Unassigned | Nikhil Mulley | Nikhil Mulley | 03/Dec/13 16:08 | 03/Dec/13 16:08 | contrib | 0 | 1 | The zookeeper ganglia plugin script makes a separate mntr call to the zookeeper node for each metric, which seems to be an overhead on zookeeper. I think it could be improved to make a single mntr call (run at the 60s interval), collect the metric data, and send all of the collected metrics to ganglia. The change and github pull request is at https://github.com/apache/zookeeper/pull/8 Please let me know if there are any other changes required here. thanks, Nikhil |
361775 | No Perforce job exists for this issue. | 0 | 362072 | 6 years, 16 weeks, 2 days ago | 0|i1qd1r: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
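The single-round-trip idea described above (ZOOKEEPER-1827) can be sketched as follows. The parser relies only on mntr's documented one-`key<TAB>value`-pair-per-line format; the function name is illustrative, not the plugin's actual API, and it assumes the full mntr reply has already been read from the socket:

```python
def parse_mntr(text):
    """Split one mntr reply (one 'key<TAB>value' pair per line) into a dict,
    so that every metric comes from a single server round trip instead of
    one call per metric."""
    metrics = {}
    for line in text.splitlines():
        key, sep, value = line.partition("\t")
        if sep:  # skip blank or malformed lines
            metrics[key] = value.strip()
    return metrics
```

The plugin's 60-second collection cycle would then issue one mntr call and fan the resulting dict out to ganglia.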
| ZooKeeper | ZOOKEEPER-1826 | zookeeper ganglia module fails to read the output of stat when too many clients are connected |
Bug | Open | Minor | Unresolved | Unassigned | Ben Hartshorne | Ben Hartshorne | 03/Dec/13 14:41 | 06/Dec/13 16:34 | 0 | 1 | The ganglia zookeeper module uses a single 2048 byte socket.recv to get the response from the 'stat' command. When there are more than a few clients connected, the list of connected clients fills the buffer before the script gets to the actual metrics it's trying to report. This bug is fixed in https://github.com/maplebed/zookeeper-monitoring/tree/ben.fetch_more_data |
361739 | No Perforce job exists for this issue. | 0 | 362036 | 6 years, 16 weeks, 2 days ago | 0|i1qctr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1825 | In ClientBase, if not set "build.test.dir" will throw IOException |
Test | Open | Major | Unresolved | Unassigned | chendihao | chendihao | 02/Dec/13 08:01 | 02/Dec/13 21:19 | tests | 0 | 1 | {code} static final File BASETEST = new File(System.getProperty("build.test.dir", "build")); {code} {code} public static File createTmpDir() throws IOException { return createTmpDir(BASETEST); } static File createTmpDir(File parentDir) throws IOException { File tmpFile = File.createTempFile("test", ".junit", parentDir); // don't delete tmpFile - this ensures we don't attempt to create // a tmpDir with a duplicate name File tmpDir = new File(tmpFile + ".dir"); Assert.assertFalse(tmpDir.exists()); // never true if tmpfile does it's job Assert.assertTrue(tmpDir.mkdirs()); return tmpDir; } {code} Because the default directory "build" may not exist, createTmpDir() will throw IOException showing "No such file or directory". I think replacing "build" with "." is more reasonable. |
361384 | No Perforce job exists for this issue. | 0 | 361683 | 6 years, 16 weeks, 3 days ago | 0|i1qanr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1824 | We need a MiniZooKeeperCluster for unit test |
Test | Open | Major | Unresolved | Unassigned | chendihao | chendihao | 02/Dec/13 03:10 | 16/Apr/14 04:57 | tests | 2 | 6 | We are developing a timestamp server to offer a precise auto-increasing timestamp for other processes. ZooKeeper is used to select the master and store the persistent max offered timestamp. Now I'm not sure whether the zookeeper servers work well no matter how much I damage the cluster randomly. So we need unit tests, but ZooKeeper doesn't provide a single-process cluster for us. Should we implement similar code to what HBase did in MiniZooKeeperCluster.java? ref https://issues.apache.org/jira/browse/HBASE-2218 |
361332 | No Perforce job exists for this issue. | 0 | 361631 | 5 years, 49 weeks, 1 day ago | 0|i1qac7: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1823 | zkTxnLogToolkit -dump should support printing transaction data as a string |
Bug | Closed | Trivial | Fixed | maoling | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | 27/Nov/13 20:44 | 20/May/19 13:50 | 12/Sep/18 07:33 | 3.6.0, 3.5.5 | server | 2 | 9 | 0 | 4200 | Some times it's handy to have LogFormatter show you the content of the transactions (i.e.: if you are storing text). | 100% | 100% | 4200 | 0 | pull-request-available | 360957 | No Perforce job exists for this issue. | 1 | 361256 | 1 year, 27 weeks, 1 day ago | 0|i1q813: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1821 | very ugly warning when compiling load_gen.c |
Bug | Closed | Major | Fixed | Germán Blanco | Germán Blanco | Germán Blanco | 23/Nov/13 01:43 | 13/Mar/14 14:16 | 27/Nov/13 18:23 | 3.4.6, 3.5.0 | 3.4.6, 3.5.0 | c client | 0 | 5 | macos | This is the compiler output: 3.4/src/c/src/load_gen.c:110:1: warning: control may reach end of non-void function [-Wreturn-type] [exec] } [exec] ^ 3.4/src/c/src/load_gen.c:135:1: warning: control may reach end of non-void function [-Wreturn-type] [exec] } [exec] ^ 3.4/src/c/src/load_gen.c:163:1: warning: control may reach end of non-void function [-Wreturn-type] [exec] } [exec] ^ 3.4/src/c/src/load_gen.c:180:1: warning: control may reach end of non-void function [-Wreturn-type] [exec] } [exec] ^ I think the code is missing a "return ZOK" at the end of these functions. |
360129 | No Perforce job exists for this issue. | 1 | 360428 | 6 years, 2 weeks ago | 0|i1q2xz: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1820 | The artifact org.apache.zookeeper:zookeeper:pom:3.4.5 has no license specified. |
Improvement | Open | Minor | Unresolved | Unassigned | mhd wrk | mhd wrk | 21/Nov/13 19:14 | 21/Nov/13 19:14 | 1 | 1 | We are using the maven licensing plugin (org.linuxstuff.maven:licensing-maven-plugin) to check/enforce licensing requirements for our dependencies and noticed that zookeeper doesn't include licensing terms in pom.xml, which causes the plugin to report the above-mentioned warning (see the summary). Is there any plan to add the license to pom.xml as well? |
359912 | No Perforce job exists for this issue. | 0 | 360211 | 6 years, 18 weeks ago | 0|i1q1lr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1819 | DeserializationPerfTest calls method with wrong arguments |
Bug | Resolved | Minor | Fixed | Daniel Knightly | Daniel Knightly | Daniel Knightly | 21/Nov/13 18:56 | 26/Apr/14 07:04 | 25/Apr/14 15:37 | 3.5.0 | 3.5.0 | tests | 0 | 4 | 3600 | 3600 | 0% | The DeserializationPerfTest calls SerializationPerfTest.createNodes to create serialized nodes to deserialize. However, two of the arguments, childcount and parentCVersion, are switched in the call to this method. This results in all tests unintentionally testing the same scenario. |
0% | 0% | 3600 | 3600 | 359908 | No Perforce job exists for this issue. | 1 | 360207 | 5 years, 47 weeks, 5 days ago | Fix DeserializationPerfTest which was not testing the desired scenarios. | 0|i1q1kv: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1818 | Fix don't care for trunk |
Bug | Closed | Blocker | Fixed | Fangmin Lv | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 16/Nov/13 06:53 | 20/May/19 13:50 | 17/Dec/18 09:20 | 3.5.1 | 3.6.0, 3.5.5 | 0 | 7 | 0 | 17400 | ZOOKEEPER-1810 | See umbrella jira. | 100% | 100% | 17400 | 0 | pull-request-available | 358924 | No Perforce job exists for this issue. | 1 | 359214 | 1 year, 13 weeks, 3 days ago | This is very much a copy of the patch for 3.4.7 with small adaptations. | 0|i1pvg7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1817 | ZOOKEEPER-1805 Fix don't care for b3.4 |
Sub-task | Closed | Blocker | Fixed | Flavio Paiva Junqueira | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 16/Nov/13 06:53 | 13/Mar/14 14:16 | 23/Nov/13 13:24 | 3.4.6 | 0 | 3 | See umbrella jira. | 358923 | No Perforce job exists for this issue. | 13 | 359213 | 6 years, 2 weeks ago | Thanks for the reviews and all the help, guys. Committed revision 1544858. | 0|i1pvfz: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1816 | ClientCnxn.close() should block until threads have died |
Bug | Patch Available | Minor | Unresolved | wu wen | Jared Winick | Jared Winick | 15/Nov/13 16:33 | 22/Jun/18 00:49 | 3.3.6, 3.4.5, 3.4.11 | java client | 1 | 4 | ACCUMULO-1858, ZOOKEEPER-1394 | In the testing of ACCUMULO-1379 and ACCUMULO-1858 it was seen that the non-blocking behavior of ClientCnxn.close(), and therefore ZooKeeper.close(), can cause a race condition when undeploying an application running in a Java container such as JBoss or Tomcat. As the close() method returns without joining on the sendThread and eventThread, those threads continue to execute/cleanup while the container is cleaning up the application's resources. If the container has unloaded classes by the time this code runs {code} ZooTrace.logTraceMessage(LOG, ZooTrace.getTextTraceLevel(), "SendThread exitedloop."); {code} A "java.lang.NoClassDefFoundError: org/apache/zookeeper/server/ZooTrace" can be seen. |
358855 | No Perforce job exists for this issue. | 1 | 359145 | 3 years, 19 weeks, 2 days ago | 0|i1pv0v: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
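The blocking-close behaviour that ZOOKEEPER-1816 above asks for amounts to joining the worker threads before close() returns. A toy sketch of that pattern, in Python rather than the client's Java and with invented names, not the actual ClientCnxn API:

```python
import threading

class ToyCnxn:
    """Illustrates a close() that blocks until its worker threads have died,
    so no thread cleanup code can run after close() returns (the race the
    report describes during container undeploy)."""

    def __init__(self):
        self._stop = threading.Event()
        # Stand-ins for ClientCnxn's sendThread and eventThread.
        self.send_thread = threading.Thread(target=self._stop.wait)
        self.event_thread = threading.Thread(target=self._stop.wait)
        self.send_thread.start()
        self.event_thread.start()

    def close(self):
        self._stop.set()
        # Joining here is the proposed fix: once close() returns, neither
        # thread can still be executing against unloaded classes.
        self.send_thread.join()
        self.event_thread.join()
```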
| ZooKeeper | ZOOKEEPER-1815 | Tolerate incorrectly set system hostname in tests |
Improvement | Closed | Trivial | Fixed | Unassigned | some one | some one | 15/Nov/13 16:14 | 02/Apr/19 06:40 | 18/Nov/13 21:12 | 3.5.0, 3.4.14 | tests | 0 | 4 | 0 | 1800 | A bunch of tests will fail with UnknownHostException errors when the hostname is incorrectly set on the system that you are running tests on. | 100% | 100% | 1800 | 0 | pull-request-available | 358848 | No Perforce job exists for this issue. | 3 | 359138 | 6 years, 18 weeks, 2 days ago | 0|i1puzb: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1814 | Reduction of waiting time during Fast Leader Election |
Bug | Patch Available | Major | Unresolved | Daniel Peon | Daniel Peon | Daniel Peon | 15/Nov/13 05:37 | 05/Feb/20 07:12 | 3.4.5, 3.5.0 | 3.7.0, 3.5.8 | leaderElection | 0 | 7 | 86400 | 86400 | 0% | Fast leader election takes a long time because of the exponential backoff; currently the maximum wait time is 60 seconds. It would be interesting to make this parameter configurable, for example for a server shutdown. Otherwise it sometimes takes so long that a test failure has been detected when executing org.apache.zookeeper.server.quorum.QuorumPeerMainTest: this test waits up to 30 seconds, which is smaller than the 60 seconds that the leader election can still be waiting at the moment of shutdown. Considering the failure during the test case, this issue was considered a possible bug. |
0% | 0% | 86400 | 86400 | 358733 | No Perforce job exists for this issue. | 11 | 359023 | 1 year, 17 weeks ago | 0|i1pu9r: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1813 | Zookeeper restart fails due to missing node from snapshot |
Bug | Resolved | Major | Duplicate | Unassigned | Vinayakumar B | Vinayakumar B | 15/Nov/13 01:40 | 06/Dec/13 16:33 | 15/Nov/13 04:15 | 3.4.5, 3.5.0 | 3.5.0 | 0 | 3 | ZOOKEEPER-1573 | Due to following exception Zookeeper restart is failing {noformat}java.io.IOException: Failed to process transaction type: 1 error: KeeperErrorCode = NoNode for /test/subdir2/subdir2/subdir at org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:183) at org.apache.zookeeper.server.ZKDatabase.loadDataBase(ZKDatabase.java:222) at org.apache.zookeeper.server.ZooKeeperServer.loadData(ZooKeeperServer.java:255) at org.apache.zookeeper.server.ZooKeeperServer.startdata(ZooKeeperServer.java:380) at org.apache.zookeeper.server.NIOServerCnxnFactory.startup(NIOServerCnxnFactory.java:748) at org.apache.zookeeper.server.ZooKeeperServerMain.runFromConfig(ZooKeeperServerMain.java:111) at org.apache.zookeeper.server.ZooKeeperServerMain.initializeAndRun(ZooKeeperServerMain.java:90) at org.apache.zookeeper.server.ZooKeeperServerMainTest$2.run(ZooKeeperServerMainTest.java:218) Caused by: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /test/subdir2/subdir2/subdir at org.apache.zookeeper.server.persistence.FileTxnSnapLog.processTransaction(FileTxnSnapLog.java:268) at org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:181) ... 7 more{noformat} |
358703 | No Perforce job exists for this issue. | 1 | 358993 | 6 years, 18 weeks, 6 days ago | 0|i1pu33: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1812 | ZooInspector reconnection always fails if first connection fails |
Bug | Closed | Minor | Fixed | Benjamin Jaton | Benjamin Jaton | Benjamin Jaton | 10/Nov/13 15:28 | 13/Mar/14 14:17 | 12/Nov/13 01:30 | 3.4.5, 3.5.0 | 3.4.6, 3.5.0 | contrib | 0 | 4 | Steps to reproduce: - Connect to localhost:2181 when ZooKeeper server is down. After a few seconds, ZooInspector warns that the connection has failed - start the ZooKeeper server - Reconnect to localhost:2181, ZooInspector will still not be able to connect to the server. The workaround is to relaunch ZooInspector. |
zooinspector | 357845 | No Perforce job exists for this issue. | 2 | 358135 | 6 years, 2 weeks ago | Fixed ZooInspector reconnection |
Reviewed
|
0|i1posn: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1811 | The ZooKeeperSaslClient service name principal is hardcoded to "zookeeper" |
Bug | Closed | Major | Fixed | Harsh J | Harsh J | Harsh J | 07/Nov/13 00:37 | 23/Jul/15 14:54 | 10/Feb/14 16:33 | 3.4.5 | 3.4.6, 3.5.0 | java client | 0 | 6 | AMBARI-12347 | The ClientCnxn class in ZK instantiates the ZooKeeperSaslClient with a hardcoded service name of "zookeeper". This causes all apps to fail in accessing ZK in a secure environment where the administrator has changed the principal name ZooKeeper runs as. The service name should be configurable. |
357308 | No Perforce job exists for this issue. | 1 | 357598 | 6 years, 2 weeks ago | Adds a new system property "zookeeper.sasl.client.username" that can be used to configure the ZK Kerberos (SASL) client user principal name to something other than "zookeeper" (default) for any environments that use non-standard naming for its principals. |
Reviewed
|
0|i1plhj: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1810 | Add version to FLE notifications for trunk |
Bug | Resolved | Major | Fixed | Germán Blanco | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 06/Nov/13 11:03 | 15/Nov/18 12:53 | 07/Jul/14 23:26 | 3.5.0 | 3.5.0 | leaderElection | 0 | 8 | ZOOKEEPER-1896, ZOOKEEPER-1870, ZOOKEEPER-1818 | The same as ZOOKEEPER-1808 but for trunk. | 357171 | No Perforce job exists for this issue. | 8 | 357461 | 5 years, 37 weeks, 2 days ago | 0|i1pkn3: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1809 | ephemeral node not deleted (or recreated?) after session expiry |
Bug | Resolved | Minor | Not A Problem | Unassigned | Shaun Senecal | Shaun Senecal | 06/Nov/13 00:06 | 06/Dec/13 16:32 | 06/Nov/13 22:54 | 3.4.5, 3.5.0 | 0 | 4 | ZOOKEEPER-1740 | We have been running into a situation where we attempt to recreate our ephemeral nodes after a session expiry, only to find that the node already exists. Admittedly, this is only happening when we are aggressively killing and recreating sessions in a tight loop, but I thought it might point to a larger issue which may need to be addressed. Attached is a small app which demonstrates the issue, and a log file (client and server in the same log) which shows the issue as it occurred. Reproducing the bug is a tedious process of rerunning the test over and over again, but I have typically been able to reproduce it within 15 mins of trying. The test app uses Curator; however, I think the issue is occurring at the ZK level since the logs clearly indicate the ephemeral node is deleted after the session expiry. Interesting bits from the log {noformat} ... 2013/11/06 13:46:03,065 INFO [ConnectionStateManager-0] Recreating node: /test ... 2013/11/06 13:46:03,070 DEBUG [SyncThread:0] Processing request:: sessionid:0x1422bbb36d10002 type:create cxid:0x2 zxid:0x8 txntype:1 reqpath:n/a ... 2013/11/06 13:46:03,071 DEBUG [main] Closing client for session: 0x1422bbb36d10002 2013/11/06 13:46:03,075 INFO [ProcessThread(sid:0 cport:-1):] Processed session termination for sessionid: 0x1422bbb36d10002 2013/11/06 13:46:03,079 DEBUG [SyncThread:0] Processing request:: sessionid:0x1422bbb36d10002 type:closeSession cxid:0x1 zxid:0x9 txntype:-11 reqpath:n/a 2013/11/06 13:46:03,080 DEBUG [SyncThread:0] Deleting ephemeral node /test for session 0x1422bbb36d10002 2013/11/06 13:46:03,080 DEBUG [SyncThread:0] sessionid:0x1422bbb36d10002 type:closeSession cxid:0x1 zxid:0x9 txntype:-11 reqpath:n/a ... 
2013/11/06 13:46:04,459 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:43462] Client attempting to renew session 0x1422bbb36d10002 at /127.0.0.1:59559 2013/11/06 13:46:04,459 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:43462] Invalid session 0x1422bbb36d10002 for client /127.0.0.1:59559, probably expired 2013/11/06 13:46:04,460 INFO [main-SendThread(localhost:43462)] Unable to reconnect to ZooKeeper service, session 0x1422bbb36d10002 has expired, closing socket connection 2013/11/06 13:46:04,460 DEBUG [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:43462] Dropping request: No session with sessionid 0x1422bbb36d10002 exists, probably expired and removed ... 2013/11/06 13:46:04,463 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:43462] Client attempting to establish new session at /127.0.0.1:59560 2013/11/06 13:46:04,466 DEBUG [SyncThread:0] Processing request:: sessionid:0x1422bbb36d10003 type:createSession cxid:0x0 zxid:0xa txntype:-10 reqpath:n/a 2013/11/06 13:46:04,466 DEBUG [SyncThread:0] sessionid:0x1422bbb36d10003 type:createSession cxid:0x0 zxid:0xa txntype:-10 reqpath:n/a 2013/11/06 13:46:04,467 INFO [SyncThread:0] Established session 0x1422bbb36d10003 with negotiated timeout 30000 for client /127.0.0.1:59560 ... 
2013/11/06 13:46:04,473 INFO [ConnectionStateManager-0] Recreating node: /test 2013/11/06 13:46:04,474 DEBUG [SyncThread:0] Processing request:: sessionid:0x1422bbb36d10003 type:exists cxid:0x2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/___CURATOR_KILL_SESSION___15970538640754 2013/11/06 13:46:04,474 DEBUG [SyncThread:0] sessionid:0x1422bbb36d10003 type:exists cxid:0x2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/___CURATOR_KILL_SESSION___15970538640754 2013/11/06 13:46:04,475 INFO [ProcessThread(sid:0 cport:-1):] Got user-level KeeperException when processing sessionid:0x1422bbb36d10003 type:create cxid:0x3 zxid:0xc txntype:-1 reqpath:n/a Error Path:/test Error:KeeperErrorCode = NodeExists for /test 2013/11/06 13:46:04,475 DEBUG [main-SendThread(localhost:43462)] Reading reply sessionid:0x1422bbb36d10003, packet:: clientPath:null serverPath:null finished:false header:: 2,3 replyHeader:: 2,11,-101 request:: '/___CURATOR_KILL_SESSION___15970538640754,T response:: 2013/11/06 13:46:04,476 INFO [main] Initiating client connection, connectString=127.0.0.1:43462 sessionTimeout=10000 watcher=com.netflix.curator.test.KillSession$2@4067d00a sessionId=1422bbb36d10003 sessionPasswd=<hidden> ... 
2013/11/06 13:46:04,479 ERROR [ConnectionStateManager-0] Failed to recreate ephemeral node org.apache.zookeeper.KeeperException$NodeExistsException: KeeperErrorCode = NodeExists for /test at org.apache.zookeeper.KeeperException.create(KeeperException.java:119) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783) at com.netflix.curator.framework.imps.CreateBuilderImpl$10.call(CreateBuilderImpl.java:625) at com.netflix.curator.framework.imps.CreateBuilderImpl$10.call(CreateBuilderImpl.java:609) at com.netflix.curator.RetryLoop.callWithRetry(RetryLoop.java:106) at com.netflix.curator.framework.imps.CreateBuilderImpl.pathInForeground(CreateBuilderImpl.java:605) at com.netflix.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:428) at com.netflix.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:408) at com.netflix.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:41) at com.rakuten.sandbox.sessionexpiry.nodeexists.SessionExpiryTest$2.stateChanged(SessionExpiryTest.java:72) at com.netflix.curator.framework.state.ConnectionStateManager$2.apply(ConnectionStateManager.java:184) at com.netflix.curator.framework.state.ConnectionStateManager$2.apply(ConnectionStateManager.java:180) at com.netflix.curator.framework.listen.ListenerContainer$1.run(ListenerContainer.java:92) at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:262) at com.netflix.curator.framework.listen.ListenerContainer.forEach(ListenerContainer.java:83) at com.netflix.curator.framework.state.ConnectionStateManager.processEvents(ConnectionStateManager.java:177) at com.netflix.curator.framework.state.ConnectionStateManager.access$000(ConnectionStateManager.java:40) at com.netflix.curator.framework.state.ConnectionStateManager$1.call(ConnectionStateManager.java:104) at 
java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:744) ... {noformat} |
357097 | No Perforce job exists for this issue. | 3 | 357387 | 6 years, 19 weeks, 2 days ago | 0|i1pk6n: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1808 | ZOOKEEPER-1805 Add version to FLE notifications for 3.4 branch |
Sub-task | Closed | Blocker | Fixed | Flavio Paiva Junqueira | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 05/Nov/13 12:03 | 13/Mar/14 14:16 | 15/Nov/13 13:27 | 3.4.6 | 0 | 5 | Add version to notification messages so that we can differentiate messages during rolling upgrades. This task is for the 3.4 branch only. | 356983 | No Perforce job exists for this issue. | 8 | 357273 | 6 years, 2 weeks ago | 0|i1pjhr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1807 | Observers spam each other creating connections to the election addr |
Bug | Resolved | Blocker | Fixed | Alexander Shraer | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | 01/Nov/13 19:32 | 10/May/18 16:01 | 10/May/18 16:01 | 3.5.4, 3.6.0 | 0 | 11 | ZOOKEEPER-2080, ZOOKEEPER-1783 | Hey [~shralex], I noticed today that my Observers are spamming each other trying to open connections to the election port. I've got tons of these: {noformat} 2013-11-01 22:19:45,819 - DEBUG [WorkerSender[myid=13]] - There is a connection already for server 9 2013-11-01 22:19:45,819 - DEBUG [WorkerSender[myid=13]] - There is a connection already for server 10 2013-11-01 22:19:45,819 - DEBUG [WorkerSender[myid=13]] - There is a connection already for server 6 2013-11-01 22:19:45,819 - DEBUG [WorkerSender[myid=13]] - There is a connection already for server 12 2013-11-01 22:19:45,819 - DEBUG [WorkerSender[myid=13]] - There is a connection already for server 14 {noformat} and so and so on ad nauseam. Now, looking around I found this inside FastLeaderElection.java from when you committed ZOOKEEPER-107: {noformat} private void sendNotifications() { - for (QuorumServer server : self.getVotingView().values()) { - long sid = server.id; - + for (long sid : self.getAllKnownServerIds()) { + QuorumVerifier qv = self.getQuorumVerifier(); {noformat} Is that really desired? I suspect that is what's causing Observers to try to connect to each other (as opposed as just connecting to participants). I'll give it a try now and let you know. (Also, we use observer ids that are > 0, and I saw some parts of the code that might not deal with that assumption - so it could be that too..). |
356550 | No Perforce job exists for this issue. | 9 | 356838 | 1 year, 45 weeks ago | 0|i1pgtb: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1806 | testCurrentServersAreObserversInNextConfig failing frequently on trunk with non-jdk6 |
Bug | Closed | Major | Duplicate | Alexander Shraer | Patrick D. Hunt | Patrick D. Hunt | 31/Oct/13 17:47 | 19/Dec/19 18:01 | 10/Aug/16 13:04 | 3.5.0 | 3.5.3 | server | 0 | 3 | ZOOKEEPER-2080 | testCurrentServersAreObserversInNextConfig failing frequently on trunk with jdk7 I see a number of failures recently on this test. Is it a real issue or flakey? Perhaps due to re-ordering/cleanup with jdk7 as we've seen with some other tests? (I don't see this test failing with jdk6) |
test | 356338 | No Perforce job exists for this issue. | 0 | 356626 | 3 years, 32 weeks, 1 day ago | 0|i1pfi7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1805 | "Don't care" value in ZooKeeper election breaks rolling upgrades |
Bug | Closed | Blocker | Fixed | Flavio Paiva Junqueira | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 29/Oct/13 23:18 | 13/Mar/14 14:31 | 13/Mar/14 14:25 | 3.4.6 | 1 | 9 | ZOOKEEPER-1808, ZOOKEEPER-1817 | ZOOKEEPER-1870, ZOOKEEPER-1732 | This is an issue that has been originally reported in ZOOKEEPER-1732. | 355971 | No Perforce job exists for this issue. | 11 | 356259 | 6 years, 2 weeks ago | 0|i1pd93: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1804 | Stat the realtime tps of zookeeper server |
Improvement | Open | Major | Unresolved | Leader Ni | Leader Ni | Leader Ni | 29/Oct/13 04:56 | 14/Dec/19 06:07 | 3.7.0 | server | 08/Nov/13 | 0 | 6 | At this time, when we assessed whether zookeeper supports some business scenario, we always used the number of subscribers or the number of clients. You know, sometimes many clients connect to zookeeper but do nothing, while the others run complex business logic. So we must stat the realtime tps of zookeeper. [-----------------Solution-------------------] Solution 1: If you only want to know the real-time transactions processed, you can use the patch "ZOOKEEPER-1804.patch". Solution 2: If you also want to know how clients use zookeeper, and the real-time r/w ps of each zookeeper client, you can use the patch "ZOOKEEPER-1804-2.patch" and set the java property -Dserver_process_stats=true to enable the function. Sample: $>echo rwps|nc localhost 2181 RealTime R/W Statistics: getChildren2: 0.5994005994005994 createSession: 1.6983016983016983 closeSession: 0.999000999000999 setData: 110.18981018981019 setWatches: 129.17082917082917 getChildren: 68.83116883116884 delete: 19.980019980019982 create: 22.27772227772228 exists: 1806.2937062937062 getDate: 729.5704295704296 |
355792 | No Perforce job exists for this issue. | 2 | 356080 | 1 year, 7 weeks, 1 day ago | 0|i1pc5b: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1803 | Add description for pzxid in programmer's guide. |
Bug | Closed | Major | Fixed | Mohammad Arshad | Leader Ni | Leader Ni | 26/Oct/13 07:50 | 21/Jul/16 16:18 | 25/Sep/15 02:52 | 3.4.7, 3.5.2, 3.6.0 | documentation | 31/Oct/13 | 0 | 6 | The Stat structure (org.apache.zookeeper.data.Stat) has the field pzxid, but there is no documentation for it in the programmer's guide (http://zookeeper.apache.org/doc/r3.4.3/zookeeperProgrammers.html#sc_zkStatStructure) | 355442 | No Perforce job exists for this issue. | 2 | 355730 | 4 years, 25 weeks, 6 days ago | 0|i1p9zr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1802 | ZOOKEEPER-3170 flaky test testResyncByTxnlogThenDiffAfterFollowerCrashes |
Sub-task | Resolved | Major | Cannot Reproduce | Andor Molnar | Patrick D. Hunt | Patrick D. Hunt | 25/Oct/13 12:48 | 25/Oct/18 11:29 | 25/Oct/18 11:29 | 3.5.0 | tests | 0 | 2 | This test fails intermittently on trunk: https://builds.apache.org/view/S-Z/view/ZooKeeper/job/ZooKeeper-trunk-jdk7/691/testReport/junit/org.apache.zookeeper.test/FollowerResyncConcurrencyTest/testResyncByTxnlogThenDiffAfterFollowerCrashes/ |
flaky | 355344 | No Perforce job exists for this issue. | 0 | 355632 | 1 year, 21 weeks ago | 0|i1p9dz: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1801 | TestReconfig failure |
Bug | Open | Major | Unresolved | Marshall McMullen | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 25/Oct/13 08:16 | 05/Feb/20 07:16 | 3.7.0, 3.5.8 | quorum | 0 | 3 | This is the message: {noformat} /home/jenkins/jenkins-slave/workspace/ZooKeeper-trunk/trunk/src/c/tests/TestReconfig.cc:183: Assertion: equality assertion failed [Expected: 1, Actual : 0] {noformat} https://builds.apache.org/job/ZooKeeper-trunk/2100/ |
355300 | No Perforce job exists for this issue. | 0 | 355588 | 1 year, 17 weeks, 1 day ago | 0|i1p947: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1800 | jenkins failure in testGetProposalFromTxn |
Bug | Open | Major | Unresolved | Thawan Kooburat | Patrick D. Hunt | Patrick D. Hunt | 24/Oct/13 13:04 | 05/Feb/20 07:17 | 3.5.0 | 3.7.0, 3.5.8 | tests | 0 | 1 | https://builds.apache.org/view/S-Z/view/ZooKeeper/job/ZooKeeper-trunk-jdk7/691/testReport/junit/org.apache.zookeeper.test/GetProposalFromTxnTest/testGetProposalFromTxn/ test was introduced in ZOOKEEPER-1413, seems to have failed twice so far this month. |
355096 | No Perforce job exists for this issue. | 0 | 355384 | 6 years, 21 weeks, 2 days ago | 0|i1p7uv: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1799 | SaslAuthFailDesignatedClientTest.testAuth fails frequently on SUSE |
Bug | Closed | Minor | Fixed | Jeffrey Zhong | Jeffrey Zhong | Jeffrey Zhong | 22/Oct/13 14:20 | 13/Mar/14 14:17 | 22/Oct/13 19:17 | 3.4.5 | 3.4.6, 3.5.0 | tests | 0 | 4 | org.apache.zookeeper.test.SaslAuthFailDesignatedClientTest.testAuth often fails on SUSE with the following error stack trace: {code} junit.framework.AssertionFailedError: expected [0x141ccb60d870000] expected:<1> but was:<0> at org.apache.zookeeper.test.JMXEnv.ensureAll(JMXEnv.java:115) at org.apache.zookeeper.test.ClientBase.createClient(ClientBase.java:200) at org.apache.zookeeper.test.ClientBase.createClient(ClientBase.java:174) at org.apache.zookeeper.test.ClientBase.createClient(ClientBase.java:159) at org.apache.zookeeper.test.ClientBase.createClient(ClientBase.java:152) at org.apache.zookeeper.test.SaslAuthFailDesignatedClientTest.testAuth(SaslAuthFailDesignatedClientTest.java:87) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:52) {code} The reason is that this is a negative test. After authentication fails, the server closes the client connection, and with it the session, right before the test case calls JMXEnv.ensureAll to verify the session. The log events below show the sequence; you can see the session was already closed before the client's JMXEnv.ensureAll call. {code} 2013-10-18 10:56:25,320 [myid:] - INFO [SyncThread:0:ZooKeeperServer@595] - Established session 0x141ccb60d870000 with negotiated timeout 30000 for client /127.0.0.1:58272 2013-10-18 10:56:25,327 [myid:] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11221:ZooKeeperServer@940] - Client failed to SASL authenticate: javax.security.sasl.SaslException: DIGEST-MD5: digest response format violation. Mismatched response. 2013-10-18 10:56:25,327 [myid:] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11221:ZooKeeperServer@946] - Closing client connection due to SASL authentication failure. 
2013-10-18 10:56:25,329 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:11221:NIOServerCnxn@1001] - Closed socket connection for client /127.0.0.1:58272 which had sessionid 0x141ccb60d870000 .... 2013-10-18 10:56:25,330 [myid:] - INFO [main-SendThread(localhost:11221):ClientCnxn$SendThread@1089] - Unable to read additional data from server sessionid 0x141ccb60d870000, likely server has closed socket, closing socket connection and attempting reconnect 2013-10-18 10:56:25,332 [myid:] - INFO [main:JMXEnv@105] - expect:0x141ccb60d870000 {code} |
354710 | No Perforce job exists for this issue. | 1 | 354999 | 6 years, 2 weeks ago |
Reviewed
|
0|i1p5hb: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1798 | Fix race condition in testNormalObserverRun |
Bug | Closed | Blocker | Fixed | Thawan Kooburat | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 20/Oct/13 04:52 | 26/Aug/16 17:16 | 13/Nov/13 23:47 | 3.4.6, 3.5.0 | 0 | 4 | ZOOKEEPER-2538, ZOOKEEPER-1793 | This is the output message: {noformat} Testcase: testNormalObserverRun took 4.221 sec FAILED expected:<data[2]> but was:<data[1]> junit.framework.AssertionFailedError: expected:<data[2]> but was:<data[1]> at org.apache.zookeeper.server.quorum.Zab1_0Test$8.converseWithObserver(Zab1_0Test.java:1118) at org.apache.zookeeper.server.quorum.Zab1_0Test.testObserverConversation(Zab1_0Test.java:546) at org.apache.zookeeper.server.quorum.Zab1_0Test.testNormalObserverRun(Zab1_0Test.java:994) {noformat} |
354303 | No Perforce job exists for this issue. | 7 | 354593 | 6 years, 2 weeks ago | 0|i1p2zb: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1797 | PurgeTxnLog may delete data logs during roll |
Bug | Resolved | Blocker | Fixed | Rakesh Radhakrishnan | Derek Dagit | Derek Dagit | 18/Oct/13 14:57 | 10/Sep/16 20:06 | 17/May/14 21:05 | 3.4.5 | 3.4.7, 3.5.0 | server | 0 | 8 | ZOOKEEPER-2574 | org.apache.zookeeper.server.PurgeTxnLog deletes old data logs and snapshots, keeping the newest N snapshots and any data logs that have been written since the snapshot. It does this by listing the available snapshots and logs and creating a blacklist of snapshots and logs that should not be deleted. Then, it searches for and deletes all logs and snapshots that are not in this list. It appears that if logs are rolling or a new snapshot is created during this process, then these newer files will be unintentionally deleted. |
354189 | No Perforce job exists for this issue. | 4 | 354481 | 4 years, 49 weeks, 2 days ago | 0|i1p2an: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1796 | Move common code from {Follower, Observer}ZooKeeperServer into LearnerZooKeeperServer |
Improvement | Resolved | Trivial | Fixed | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | 14/Oct/13 18:31 | 02/Mar/16 20:32 | 21/Mar/14 01:37 | 3.5.0 | 0 | 3 | Since ZOOKEEPER-1552 we are enabling syncProcessor in Observers, so we should have a proper shutdown() method there. Since FollowerZooKeeperServer already has one, which does the same thing that we need, move that to LearnerZooKeeperServer along with some related instance variables. | 353404 | No Perforce job exists for this issue. | 1 | 353696 | 6 years, 6 days ago | 0|i1oxhz: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1795 | unable to build c client on ubuntu |
Bug | Resolved | Blocker | Fixed | Raúl Gutiérrez Segalés | Patrick D. Hunt | Patrick D. Hunt | 14/Oct/13 15:38 | 06/Dec/13 16:30 | 14/Oct/13 19:57 | 3.5.0 | 3.5.0 | c client | 0 | 4 | ZOOKEEPER-1742, ZOOKEEPER-1646 | Seems there is an issue for Ubuntu (I'm on 13.04), however I'm only seeing it on trunk and not branch 34 {noformat} make check make zktest-st zktest-mt make[1]: Entering directory `/home/phunt/dev/svn/svn-zookeeper/src/c' g++ -DHAVE_CONFIG_H -I. -I./include -I./tests -I./generated -DUSE_STATIC_LIB -DZKSERVER_CMD="\"./tests/zkServer.sh\"" -DZOO_IPV6_ENABLED -g -O2 -MT zktest_st-TestReconfigServer.o -MD -MP -MF .deps/zktest_st-TestReconfigServer.Tpo -c -o zktest_st-TestReconfigServer.o `test -f 'tests/TestReconfigServer.cc' || echo './'`tests/TestReconfigServer.cc tests/TestReconfigServer.cc: In member function 'bool TestReconfigServer::waitForConnected(zhandle_t*, uint32_t)': tests/TestReconfigServer.cc:128:16: error: 'sleep' was not declared in this scope make[1]: *** [zktest_st-TestReconfigServer.o] Error 1 make[1]: Leaving directory `/home/phunt/dev/svn/svn-zookeeper/src/c' make: *** [check-am] Error 2 {noformat} I have {noformat} g++ --version g++ (Ubuntu/Linaro 4.7.3-1ubuntu1) 4.7.3 Copyright (C) 2012 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. {noformat} |
353379 | No Perforce job exists for this issue. | 1 | 353671 | 6 years, 23 weeks, 2 days ago |
Reviewed
|
0|i1oxcf: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1794 | ZOOKEEPER-1777 Add hash check to transaction history in quorum servers |
Sub-task | Open | Major | Unresolved | Germán Blanco | Germán Blanco | Germán Blanco | 14/Oct/13 02:15 | 14/Dec/19 06:09 | 3.5.0 | 3.7.0 | quorum | 1 | 6 | 1209600 | 1209600 | 0% | The goal of this task is to add a hash number to each transaction in the transaction history. This hash number will depend on the entire transaction history. This hash number will be the same in all members of the quorum, since it shall have the same result if the members have the same transaction history. That means that there will be no need to send any new information between members of the quorum, during the broadcast phase. The hash number will be checked by the leader when learners try to connect, and it shall also be sent together with the snapshot during synchronisation. If the hash number does not match, the synchronisation shall be done with a snapshot in order to overwrite the conflicts in the transaction history. | 0% | 0% | 1209600 | 1209600 | 353270 | No Perforce job exists for this issue. | 3 | 353563 | 5 years, 49 weeks, 3 days ago | The snapshot file name will be updated by adding also the hash. That means a name like "snapshot.100001.123533" where 100001 is the zxid and 123533 is the hash. The server will continue to be able to read files without the hash in the name. | 0|i1owof: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1793 | Zab1_0Test.testNormalObserverRun() is flaky |
Bug | Resolved | Major | Duplicate | Unassigned | Alexander Shraer | Alexander Shraer | 11/Oct/13 00:43 | 06/Dec/13 16:30 | 24/Oct/13 14:54 | quorum, server, tests | 0 | 4 | ZOOKEEPER-1798 | not sure if this is due to a known issue or not. // check and make sure the change is persisted zkDb2 = new ZKDatabase(new FileTxnSnapLog(logDir, snapDir)); lastZxid = zkDb2.loadDataBase(); Assert.assertEquals("data2", new String(zkDb2.getData("/foo", stat, null))); this assert periodically (once every 3 runs of the test or so) fails saying that getData returns data1 and not data2. |
352988 | No Perforce job exists for this issue. | 0 | 353275 | 6 years, 22 weeks ago | 0|i1ouwf: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1792 | Observers don't need to keep an in-memory copy of last commited proposals |
Improvement | Open | Minor | Unresolved | Unassigned | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | 10/Oct/13 21:18 | 29/Jun/15 01:55 | 0 | 2 | In FinalRequestProcessor.java#processRequest we have: {noformat} if (request.isQuorum()) { zks.getZKDatabase().addCommittedProposal(request); } {noformat} but this is only useful to the leader, since committed proposals are only used from LearnerHandler to sync up followers. I presume followers do need it, as they might become the leader at any point. But observers have no need for them, so we could probably special-case this and optimize the path for them. |
352973 | No Perforce job exists for this issue. | 0 | 353260 | 4 years, 38 weeks, 3 days ago | 0|i1out3: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1791 | ZooKeeper package includes unnecessary jars that are part of the package. |
Bug | Resolved | Major | Fixed | Mahadev Konar | Mahadev Konar | Mahadev Konar | 10/Oct/13 20:07 | 20/May/14 07:09 | 17/May/14 22:42 | 3.5.0 | 3.5.0 | build | 0 | 4 | The ZooKeeper package includes unnecessary jars. Jars like the fatjar and {code} maven-ant-tasks-2.1.3.jar maven-artifact-2.2.1.jar maven-artifact-manager-2.2.1.jar maven-error-diagnostics-2.2.1.jar maven-model-2.2.1.jar maven-plugin-registry-2.2.1.jar maven-profile-2.2.1.jar maven-project-2.2.1.jar maven-repository-metadata-2.2.1.jar {code} are part of the ZooKeeper package and RPM (via Bigtop). |
352961 | No Perforce job exists for this issue. | 1 | 353248 | 5 years, 44 weeks, 2 days ago | 0|i1ouqf: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1790 | Deal with special ObserverId in QuorumCnxManager.receiveConnection |
Bug | Closed | Major | Not A Problem | Alexander Shraer | Alexander Shraer | Alexander Shraer | 10/Oct/13 19:59 | 13/Mar/14 14:16 | 13/Oct/13 01:50 | 3.4.6, 3.5.0 | 3.4.6, 3.5.0 | server | 0 | 4 | QuorumCnxManager.receiveConnection assumes that a negative sid means that this is a 3.5.0 server, which has a different communication protocol. This doesn't account for the fact that ObserverId = -1 is a special id that may be used by observers and is also negative. This requires a fix to trunk and a separate fix to 3.4 branch, where this function is different (see ZOOKEEPER-1633) |
352959 | No Perforce job exists for this issue. | 0 | 353246 | 6 years, 2 weeks ago | 0|i1oupz: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1789 | 3.4.x observer causes NPE on 3.5.0 (trunk) participants |
Bug | Resolved | Critical | Fixed | Alexander Shraer | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | 10/Oct/13 19:10 | 24/Jul/14 07:22 | 23/Jul/14 13:36 | 3.5.0 | 3.5.0 | 0 | 6 | (assigning to Alex because this was introduced by ZOOKEEPER-107, but will upload a patch as well.) I have a 5 participants cluster running what will be 3.5.0 (i.e.: trunk as of today) and an observer running 3.4 (trunk from 3.4 branch). When the observer tries to establish a connection to the participants I get: {noformat} Thread Thread[10.40.78.121:3888,5,main] died java.lang.NullPointerException at org.apache.zookeeper.server.quorum.QuorumCnxManager.receiveConnection(QuorumCnxManager.java:240) at org.apache.zookeeper.server.quorum.QuorumCnxManager$Listener.run(QuorumCnxManager.java:552) {noformat} Looking at QuorumCnxManager.java:240: {noformat} if (protocolVersion >= 0) { // this is a server id and not a protocol version sid = protocolVersion; electionAddr = self.getVotingView().get(sid).electionAddr; } else { {noformat} and self.getVotingView().get(sid) will be null for Observers. So this block should cover that case. |
352951 | No Perforce job exists for this issue. | 2 | 353238 | 5 years, 35 weeks ago |
Reviewed
|
0|i1ouo7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
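The NPE in ZOOKEEPER-1789 above comes down to a map lookup that is only valid for participants: a non-negative protocolVersion is treated as a server id and looked up in the voting view, which contains only voting members, so an observer's id comes back null. A minimal toy model of that failure mode (a plain HashMap stands in for ZooKeeper's voting view; class and method names here are illustrative, not ZooKeeper's API):

```java
import java.util.HashMap;
import java.util.Map;

public class VotingViewLookup {
    // Mirrors the shape of the quoted block: non-negative protocolVersion is
    // really a server id. The lookup returns null for ids absent from the
    // voting view (observers), which is exactly what must be null-checked.
    static String electionAddrOrNull(Map<Long, String> votingView, long protocolVersion) {
        if (protocolVersion >= 0) { // this is a server id, not a protocol version
            long sid = protocolVersion;
            return votingView.get(sid); // null for observers
        }
        return null; // negative value: actually a protocol version marker
    }

    public static void main(String[] args) {
        Map<Long, String> votingView = new HashMap<>();
        votingView.put(1L, "10.40.78.121:3888");
        System.out.println(electionAddrOrNull(votingView, 1L)); // participant: found
        System.out.println(electionAddrOrNull(votingView, 5L)); // observer id: null, must not be dereferenced
    }
}
```

Dereferencing the second result without a check is the reported crash; the fix is to handle the null (observer) case in that branch.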
| ZooKeeper | ZOOKEEPER-1788 | Support clientID field on connection requests |
Improvement | Open | Minor | Unresolved | Unassigned | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | 09/Oct/13 15:43 | 09/Oct/13 15:43 | 0 | 1 | I suspect it's very common for deployments to have a wide variety of client libraries (different versions/languages) connecting to a given cluster. It would be handy to have a way to identify clients via a clientID (akin to HTTP's User-Agent header). This could be implemented in ZooKeeperServer#processConnectRequest [1] and be fully backwards compatible. The clientID could then be kept with the corresponding ServerCnxn instance and be used for better logging (or stats exposed through 4-letter commands). The corresponding client-side change would be to expose an API to set the clientID on each connection handler (and by default it could be something like "zk java $version", "zk c $version", etc). Thoughts? [1] https://github.com/apache/zookeeper/blob/trunk/src/java/main/org/apache/zookeeper/server/ZooKeeperServer.java#L797 |
352722 | No Perforce job exists for this issue. | 0 | 353009 | 6 years, 24 weeks, 1 day ago | 0|i1ot9r: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1787 | Add support for enabling local session in rolling upgrade |
Bug | Patch Available | Minor | Unresolved | Raúl Gutiérrez Segalés | Thawan Kooburat | Thawan Kooburat | 09/Oct/13 14:31 | 23/Aug/14 11:43 | 3.5.0 | server | 0 | 5 | ZOOKEEPER-1147 | Currently, local sessions need to be enabled by stopping the entire ensemble. If a rolling upgrade is used, all write requests from a local session will fail with a session-moved error until local sessions are enabled on the leader. | 352703 | No Perforce job exists for this issue. | 1 | 352990 | 5 years, 30 weeks, 5 days ago | 0|i1ot5r: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1786 | ZooKeeper data model documentation is incorrect |
Bug | Closed | Minor | Fixed | Niraj Tolia | Niraj Tolia | Niraj Tolia | 09/Oct/13 04:10 | 13/Mar/14 14:16 | 15/Nov/13 13:07 | 3.4.6 | 3.4.6, 3.5.0 | documentation | 0 | 4 | When I look at https://zookeeper.apache.org/doc/trunk/zookeeperProgrammers.html#ch_zkDataModel, I see two things that seem wrong in terms of restricted characters: * \uXFFFE - \uXFFFF (where X is a digit 1 - E) * \uF0000 - \uFFFFF These definitions are invalid characters in Java and aren't reflected in PathUtils either (or PathUtilsTest). In fact the code in PathUtils states: {code:borderStyle=solid} } else if (c > '\u0000' && c <= '\u001f' || c >= '\u007f' && c <= '\u009F' || c >= '\ud800' && c <= '\uf8ff' || c >= '\ufff0' && c <= '\uffff') { reason = "invalid charater @" + i; break; } {code} Unless I am missing something, this simple patch should fix the documentation problem: {code} Index: src/docs/src/documentation/content/xdocs/zookeeperProgrammers.xml =================================================================== --- src/docs/src/documentation/content/xdocs/zookeeperProgrammers.xml (revision 1530514) +++ src/docs/src/documentation/content/xdocs/zookeeperProgrammers.xml (working copy) @@ -139,8 +139,7 @@ <listitem> <para>The following characters are not allowed: \ud800 - uF8FF, - \uFFF0 - uFFFF, \uXFFFE - \uXFFFF (where X is a digit 1 - E), \uF0000 - - \uFFFFF.</para> + \uFFF0 - uFFFF.</para> </listitem> <listitem> {code} |
352602 | No Perforce job exists for this issue. | 1 | 352889 | 6 years, 2 weeks ago | 0|i1osjb: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
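The PathUtils check quoted in ZOOKEEPER-1786 above only covers Basic Multilingual Plane characters, which is consistent with the report's point that ranges like \uXFFFE cannot even be written as a single Java char. A standalone sketch of that range check (the class and method names here are illustrative, not ZooKeeper's API; the ranges mirror the quoted snippet):

```java
public class PathCharCheck {
    // Same four ranges as the quoted PathUtils code: low control characters,
    // DEL plus C1 controls, surrogates and private use area, and the
    // specials/noncharacters block at the top of the BMP.
    static boolean isInvalidPathChar(char c) {
        return c > '\u0000' && c <= '\u001f'
            || c >= '\u007f' && c <= '\u009F'
            || c >= '\ud800' && c <= '\uf8ff'
            || c >= '\ufff0' && c <= '\uffff';
    }

    public static void main(String[] args) {
        System.out.println(isInvalidPathChar('a'));      // ordinary letter: allowed
        System.out.println(isInvalidPathChar('\u0001')); // control char: rejected
        System.out.println(isInvalidPathChar('\uFFF0')); // specials block: rejected
    }
}
```

Note that supplementary-plane ranges such as \uF0000 - \uFFFFF would require surrogate pairs in Java, and the surrogate range \ud800 - \udfff is already rejected wholesale above, which supports the proposed documentation fix.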
| ZooKeeper | ZOOKEEPER-1785 | Small fix in zkServer.sh to support new configuration format |
Bug | Resolved | Minor | Not A Problem | Alexander Shraer | Alexander Shraer | Alexander Shraer | 08/Oct/13 20:22 | 29/Mar/14 18:48 | 29/Mar/14 18:48 | 3.5.0 | 3.5.0 | scripts | 0 | 2 | ZOOKEEPER-1783 | The problem can be reproduced by running a server with the following type of config file: dataDir=/Users/shralex/zookeeper-test/zookeeper1 syncLimit=2 initLimit=5 tickTime=2000 server.1=localhost:2721:2731:participant;2791 server.2=localhost:2722:2732:participant;2792 and then trying to do "zkServer.sh status" Here I specified the servers using the new config format but still used the static config file and didn't include the "clientPort" key. zkServer.sh already supports the new configuration format, but expects server spec to appear in the dynamic config file if it uses the new format. So in the example above it will not find the client port. The current logic for executing something like 'zkServer.sh status' is: 1. Look for clientPort keyword in the static config file 2. Look for the client port in the server spec in the dynamic config file The attached patch adds an intermediate step: 1'. Look for the client port in the server spec in the static config file |
352553 | No Perforce job exists for this issue. | 1 | 352840 | 5 years, 51 weeks, 5 days ago | 0|i1os8f: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1784 | Logic to process INFORMANDACTIVATE packets in syncWithLeader seems bogus |
Bug | Resolved | Major | Fixed | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | Raúl Gutiérrez Segalés | 08/Oct/13 15:18 | 07/Apr/15 17:15 | 24/Jan/15 19:23 | 3.5.0 | 3.5.1 | 0 | 6 | If you look at Learner#syncWithLeader: {noformat} while (self.isRunning()) { readPacket(qp); switch(qp.getType()) { ....... case Leader.INFORM: case Leader.INFORMANDACTIVATE: PacketInFlight packet = new PacketInFlight(); packet.hdr = new TxnHeader(); if (qp.getType() == Leader.COMMITANDACTIVATE) { {noformat} I guess "qp.getType() == Leader.COMMITANDACTIVATE" is a typo that should read "qp.getType() == Leader.INFORMANDACTIVATE". Assigning to Alexander for now since this is part of ZOOKEEPER-107. |
352492 | No Perforce job exists for this issue. | 2 | 352779 | 4 years, 50 weeks, 2 days ago | Committed to trunk. Thanks Raul, and sorry that it took so long to commit. | 0|i1oruv: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1783 | Distinguish initial configuration from first established configuration |
Bug | Resolved | Major | Fixed | Alexander Shraer | Alexander Shraer | Alexander Shraer | 07/Oct/13 14:01 | 29/Mar/14 18:19 | 07/Nov/13 12:07 | 3.5.0 | 3.5.0 | quorum, server | 0 | 4 | ZOOKEEPER-1807, ZOOKEEPER-1785, ZOOKEEPER-1691 | We need a way to distinguish an initial config of a server and an initial config of a running ensemble (before any reconfigs happen). Currently both have version 0. The version of a config increases with each reconfiguration, so the problem is just with the initial config. |
352283 | No Perforce job exists for this issue. | 10 | 352571 | 6 years, 20 weeks ago | 0|i1oqkn: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1782 | zookeeper.superUser is not as super as superDigest |
Bug | Resolved | Major | Fixed | Robert Joseph Evans | Robert Joseph Evans | Robert Joseph Evans | 04/Oct/13 14:24 | 23/Jun/17 13:45 | 23/Jun/17 12:40 | 3.4.5 | 3.5.4, 3.6.0 | 1 | 7 | The zookeeper.superUser system property does not fully grant super user privileges, like zookeeper.DigestAuthenticationProvider.superDigest does. zookeeper.superUser only has as many privileges as the sasl ACLs on the znode being accessed. This means that if a znode only has digest ACLs zookeeper.superUser is ignored. Or if a znode has a single sasl ACL that only has read privileges zookeeper.superUser only has read privileges. The reason for this is that SASLAuthenticationProvider implements the superUser check in the matches method, instead of having the super user include a new Id("super","") as Digest does. |
352012 | No Perforce job exists for this issue. | 2 | 352300 | 2 years, 38 weeks, 6 days ago | 0|i1oown: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1781 | ZooKeeper Server fails if snapCount is set to 1 |
Bug | Closed | Minor | Fixed | Takashi Ohnishi | Takashi Ohnishi | Takashi Ohnishi | 03/Oct/13 21:19 | 13/Mar/14 14:17 | 07/Oct/13 19:38 | 3.4.5 | 3.4.6, 3.5.0 | quorum | 0 | 5 | If snapCount is set to 1, the ZooKeeper server can start, but it fails with the error below: 2013-10-02 18:09:07,600 [myid:1] - ERROR [SyncThread:1:SyncRequestProcessor@151] - Severe unrecoverable error, exiting java.lang.IllegalArgumentException: n must be positive at java.util.Random.nextInt(Random.java:300) at org.apache.zookeeper.server.SyncRequestProcessor.run(SyncRequestProcessor.java:93) In the source code, it seems to be assumed that snapCount must be 2 or more: {code:title=org.apache.zookeeper.server.SyncRequestProcessor.java|borderStyle=solid} 91 // we do this in an attempt to ensure that not all ofthe servers 92 // in the ensemble take a snapshot at the same time 93 int randRoll = r.nextInt(snapCount/2); {code} I think this assumption is reasonable, because snapCount = 1 is not a realistic setting... But it may be better to mention this restriction in the documentation or to add validation in the source code. |
351893 | No Perforce job exists for this issue. | 2 | 352181 | 6 years, 2 weeks ago |
Reviewed
|
0|i1oo6n: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
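The crash in ZOOKEEPER-1781 above follows directly from integer division: with snapCount = 1, snapCount / 2 is 0, and Random.nextInt with a non-positive bound throws IllegalArgumentException. A minimal sketch reproducing the failure and the report's suggested validation (the validating wrapper is an assumption, not ZooKeeper code):

```java
import java.util.Random;

public class SnapCountRoll {
    // The report's suggested fix: validate snapCount up front instead of
    // letting Random.nextInt(0) kill the SyncThread at runtime.
    static int randRoll(Random r, int snapCount) {
        if (snapCount < 2) {
            throw new IllegalArgumentException("snapCount must be >= 2, got " + snapCount);
        }
        // As in SyncRequestProcessor: a roll in [0, snapCount/2) so that not
        // all servers in the ensemble take a snapshot at the same time.
        return r.nextInt(snapCount / 2);
    }

    public static void main(String[] args) {
        Random r = new Random();
        System.out.println(randRoll(r, 100)); // some value in [0, 50)
        try {
            r.nextInt(1 / 2); // what snapCount = 1 does today: nextInt(0)
        } catch (IllegalArgumentException e) {
            System.out.println("snapCount=1 fails: " + e.getMessage());
        }
    }
}
```

The uncaught form of that exception is exactly the "Severe unrecoverable error" logged by SyncRequestProcessor in the report.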
| ZooKeeper | ZOOKEEPER-1780 | add a FAQ for why myid file is required |
Improvement | Open | Minor | Unresolved | Unassigned | Patrick D. Hunt | Patrick D. Hunt | 02/Oct/13 14:56 | 02/Oct/13 14:56 | 0 | 1 | This comes up every so often, would be good for us to document on the FAQ. https://cwiki.apache.org/confluence/display/ZOOKEEPER/FAQ here's one such discussion: http://markmail.org/message/cvzz3tq3gievicqe |
351629 | No Perforce job exists for this issue. | 0 | 351917 | 6 years, 25 weeks, 1 day ago | 0|i1omk7: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1779 | ReconfigTest littering the source root with test files |
Bug | Resolved | Critical | Fixed | Abhiraj Butala | Patrick D. Hunt | Patrick D. Hunt | 02/Oct/13 14:42 | 06/Mar/14 06:09 | 05/Mar/14 16:55 | 3.5.0 | 3.5.0 | tests | 0 | 4 | After running the ReconfigTest I saw a number of the following files in the source root (not in a subdir of the build directory as would be expected) zoo_replicated1.dynamic (saw files zoo_replicated{1-9}) |
351625 | No Perforce job exists for this issue. | 1 | 351913 | 6 years, 3 weeks ago | 0|i1omjb: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1778 | Use static final Logger objects |
Improvement | Resolved | Minor | Fixed | Rakesh Radhakrishnan | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 02/Oct/13 13:36 | 05/Oct/13 07:10 | 04/Oct/13 18:58 | 3.5.0 | 0 | 5 | Logger is not declared as 'private static final' in a few classes | 351617 | No Perforce job exists for this issue. | 1 | 351906 | 6 years, 24 weeks, 5 days ago | 0|i1omhr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1777 | Missing ephemeral nodes in one of the members of the ensemble |
Bug | Open | Major | Unresolved | Germán Blanco | Germán Blanco | Germán Blanco | 02/Oct/13 06:09 | 14/Dec/19 06:09 | 3.4.5 | 3.7.0 | quorum | 0 | 7 | ZOOKEEPER-1794 | ZOOKEEPER-1449, ZOOKEEPER-832, ZOOKEEPER-866, ZOOKEEPER-1413 | Linux, Java 1.7 | In a 3-servers ensemble, one of the followers doesn't see part of the ephemeral nodes that are present in the leader and the other follower. The 8 missing nodes in "the follower that is not ok" were created in the end of epoch 1, the ensemble is running in epoch 2. |
0% | 1209600 | 1209600 | 351470 | No Perforce job exists for this issue. | 6 | 351759 | 6 years, 22 weeks ago | 0|i1oll3: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1776 | Ephemeral nodes not present in one of the members of the ensemble |
Bug | Closed | Major | Invalid | Germán Blanco | Germán Blanco | Germán Blanco | 02/Oct/13 06:07 | 13/Mar/14 14:17 | 02/Oct/13 06:51 | 3.4.5 | 3.4.6, 3.5.0 | quorum | 0 | 2 | Linux, Java 1.7 | In a 3-servers ensemble, one of the followers doesn't see part of the ephemeral nodes that are present in the leader and the other follower. The 8 missing nodes in "the follower that is not ok" were created in the end of epoch 1, the ensemble is running in epoch 2. |
351469 | No Perforce job exists for this issue. | 0 | 351758 | 6 years, 2 weeks ago | 0|i1olkv: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1775 | Ephemeral nodes not present in one of the members of the ensemble |
Bug | Closed | Major | Invalid | Germán Blanco | Germán Blanco | Germán Blanco | 02/Oct/13 06:05 | 13/Mar/14 14:17 | 02/Oct/13 06:52 | 3.4.5 | 3.4.6, 3.5.0 | quorum | 0 | 2 | Linux, Java 1.7 | In a 3-servers ensemble, one of the followers doesn't see part of the ephemeral nodes that are present in the leader and the other follower. The 8 missing nodes in "the follower that is not ok" were created in the end of epoch 1, the ensemble is running in epoch 2. |
351467 | No Perforce job exists for this issue. | 0 | 351756 | 6 years, 2 weeks ago | 0|i1olkf: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1774 | QuorumPeerMainTest fails consistently with "complains about host" assertion failure |
Bug | Closed | Blocker | Fixed | Patrick D. Hunt | Patrick D. Hunt | Patrick D. Hunt | 01/Oct/13 19:14 | 13/Mar/14 14:17 | 08/Oct/13 01:28 | 3.4.6 | 3.4.6, 3.5.0 | quorum, tests | 0 | 6 | Ubuntu 13.04 Linux version 3.8.0-30-generic (buildd@akateko) (gcc version 4.7.3 (Ubuntu/Linaro 4.7.3-1ubuntu1) ) #44-Ubuntu SMP Thu Aug 22 20:54:42 UTC 2013 java -version java version "1.6.0_45" Java(TM) SE Runtime Environment (build 1.6.0_45-b06) Java HotSpot(TM) Server VM (build 20.45-b01, mixed mode) |
QuorumPeerMainTest fails consistently with "complains about host" assertion failure. {noformat} 2013-10-01 16:09:17,962 [myid:] - INFO [main:JUnit4ZKTestRunner$LoggedInvokeMethod@54] - TEST METHOD FAILED testBadPeerAddressInQuorum java.lang.AssertionError: complains about host at org.junit.Assert.fail(Assert.java:91) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.zookeeper.server.quorum.QuorumPeerMainTest.testBadPeerAddressInQuorum(QuorumPeerMainTest.java:434) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:52) at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:48) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184) at org.junit.runners.ParentRunner.run(ParentRunner.java:236) at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39) at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906) 2013-10-01 16:09:17,963 [myid:] - INFO [main:ZKTestCase$1@65] - FAILED testBadPeerAddressInQuorum java.lang.AssertionError: complains about host at org.junit.Assert.fail(Assert.java:91) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.zookeeper.server.quorum.QuorumPeerMainTest.testBadPeerAddressInQuorum(QuorumPeerMainTest.java:434) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) at org.apache.zookeeper.JUnit4ZKTestRunner$LoggedInvokeMethod.evaluate(JUnit4ZKTestRunner.java:52) at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:48) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184) at org.junit.runners.ParentRunner.run(ParentRunner.java:236) at 
junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906) {noformat} |
351401 | No Perforce job exists for this issue. | 2 | 351691 | 6 years, 2 weeks ago |
Reviewed
|
0|i1ol5z: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1773 | incorrect reference to jline version/lib in docs |
Bug | Resolved | Major | Fixed | Manikumar | Patrick D. Hunt | Patrick D. Hunt | 01/Oct/13 17:20 | 04/Oct/13 07:07 | 03/Oct/13 12:57 | 3.5.0 | 3.5.0 | documentation | 0 | 3 | ZOOKEEPER-1718 | The docs refer to an old version of jline {noformat} src/docs/src/documentation/content/xdocs/zookeeperAdmin.xml 227: <para><computeroutput>$ java -cp zookeeper.jar:lib/slf4j-api-1.6.1.jar:lib/slf4j-log4j12-1.6.1.jar:lib/log4j-1.2.15.jar:conf:src/java/lib/jline-0.9.94.jar \ src/docs/src/documentation/content/xdocs/zookeeperQuotas.xml 46: <para><computeroutput>$java -cp zookeeper.jar:src/java/lib/log4j-1.2.15.jar/conf:src/java/lib/jline-0.9.94.jar \ {noformat} |
351377 | No Perforce job exists for this issue. | 1 | 351669 | 6 years, 24 weeks, 6 days ago |
Reviewed
|
0|i1ol13: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1772 | config.guess downgraded from timestamp='2012-02-10' in 3.3.x to timestamp='2005-07-08' in 3.4.x |
Bug | Resolved | Major | Implemented | Unassigned | Chris Hall | Chris Hall | 01/Oct/13 09:43 | 01/Oct/13 17:57 | 01/Oct/13 17:57 | 3.4.5 | c client | 0 | 1 | Was this intentional? If so, why? | 351284 | No Perforce job exists for this issue. | 0 | 351576 | 6 years, 25 weeks, 2 days ago | 0|i1okgf: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1771 | ZooInspector authentication |
Improvement | Closed | Minor | Fixed | Benjamin Jaton | Benjamin Jaton | Benjamin Jaton | 01/Oct/13 03:04 | 13/Mar/14 14:16 | 08/Oct/13 01:40 | 3.4.5, 3.5.0 | 3.4.6, 3.5.0 | contrib | 0 | 3 | ZooInspector doesn't support authentication, so it always connects as anonymous to the ensemble. It would be nice to be able to configure the authentication scheme+data in order to browse the nodes that have ACLs set. |
351226 | No Perforce job exists for this issue. | 3 | 351518 | 6 years, 2 weeks ago | Added authentication for ZooInspector. |
Reviewed
|
0|i1ok3j: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1770 | NullPointerException in SnapshotFormatter |
Bug | Closed | Minor | Fixed | Germán Blanco | Germán Blanco | Germán Blanco | 30/Sep/13 04:00 | 13/Mar/14 14:16 | 01/Oct/13 19:46 | 3.4.5 | 3.4.6, 3.5.0 | 0 | 4 | Windows, Java 1.7 | SnapshotFormatter fails with a NullPointerException when parsing a snapshot (with "null" data in one znode): Exception in thread "main" java.lang.NullPointerException at org.apache.zookeeper.server.SnapshotFormatter.printZnode(SnapshotFormatter.java:90) at org.apache.zookeeper.server.SnapshotFormatter.printZnode(SnapshotFormatter.java:95) at org.apache.zookeeper.server.SnapshotFormatter.printZnode(SnapshotFormatter.java:95) at org.apache.zookeeper.server.SnapshotFormatter.printZnode(SnapshotFormatter.java:95) at org.apache.zookeeper.server.SnapshotFormatter.printZnodeDetails(SnapshotFormatter.java:79) at org.apache.zookeeper.server.SnapshotFormatter.printDetails(SnapshotFormatter.java:71) at org.apache.zookeeper.server.SnapshotFormatter.run(SnapshotFormatter.java:67) at org.apache.zookeeper.server.SnapshotFormatter.main(SnapshotFormatter.java:51) |
351022 | No Perforce job exists for this issue. | 3 | 351314 | 6 years, 2 weeks ago |
Reviewed
|
0|i1oiu7: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
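The ZOOKEEPER-1770 trace above shows printZnode dereferencing a znode's data without a null check. A minimal sketch of the missing guard, with a hypothetical `describe` helper standing in for the real printZnode (which also prints stat fields):

```java
// Sketch of the null guard that fixes the ZOOKEEPER-1770 symptom: a znode
// whose data is null must be treated as zero-length, not dereferenced.
public class PrintZnodeGuardDemo {
    public static String describe(String path, byte[] data) {
        int len = (data == null) ? 0 : data.length;  // the missing check
        return path + " dataLength = " + len;
    }

    public static void main(String[] args) {
        System.out.println(describe("/a", new byte[3]));
        System.out.println(describe("/b", null));    // would have been the NPE
    }
}
```

Null data is legal in ZooKeeper (a create with `null` data succeeds), which is why a formatter that assumes `data.length` is reachable can be broken by an ordinary snapshot.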
| ZooKeeper | ZOOKEEPER-1769 | ZooInspector can't display node data/metadata/ACLs |
Bug | Resolved | Minor | Fixed | Benjamin Jaton | Benjamin Jaton | Benjamin Jaton | 29/Sep/13 13:12 | 01/Oct/13 07:11 | 01/Oct/13 01:56 | 3.5.0 | 3.5.0 | contrib | 0 | 4 | Ubuntu | There seem to be a few bugs in the trunk that prevent ZooInspector from loading the node viewers (the 3 tabs in the main window when you select a ZK node in the tree no longer show up). Apparently this was introduced 2 years ago by a refactoring of icons and another that partially fixed a typo ("veiwer" -> "viewer"). Note: the bug is only in trunk; 3.4 is fine. |
patch | 350979 | No Perforce job exists for this issue. | 1 | 351270 | 6 years, 25 weeks, 2 days ago | 0|i1oikf: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1768 | Cluster fails election loop until the device is full |
Bug | Open | Major | Unresolved | Flavio Paiva Junqueira | yuxin.yan | yuxin.yan | 27/Sep/13 05:19 | 05/Feb/20 07:16 | 3.4.5 | 3.7.0, 3.5.8 | leaderElection | 0 | 2 | Hi, I have a five-node cluster running 3.4.5 and one node is now offline. I restarted the node, but it reports "Error contacting service. It is probably not running." The node keeps re-running leader election and re-syncing the snapshot logs, and the device fills up every ten minutes. Could someone help? I will put the log and zoo.cfg in the attachment. Thanks all. yyx, |
350723 | No Perforce job exists for this issue. | 2 | 351014 | 6 years, 19 weeks, 1 day ago | 0|i1ogzr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1767 | DEBUG log statements use SLF4j {} format to improve performance |
Improvement | Patch Available | Minor | Unresolved | Jackie Chang | Jackie Chang | Jackie Chang | 26/Sep/13 21:28 | 22/Feb/15 17:31 | 3.4.5 | 0 | 3 | As a coordination service, ZooKeeper is meant to be high performant. DEBUG logs are not normally viewed (see Doug Cutting's comment in HADOOP-953). I propose to add a conditional check to each DEBUG log stmt to improve performance. Firstly, previous issues added a condition check before a DEBUG log stmt. For example, in ZOOKEEPER-558: {code} - LOG.debug("Got notification sessionid:0x" - + Long.toHexString(sessionId)); + if (LOG.isDebugEnabled()) { + LOG.debug("Got notification sessionid:0x" + + Long.toHexString(sessionId)); + } {code} And in ZOOKEEPER-259: {code} - LOG - .debug("Got ping sessionid:0x" - + Long.toHexString(sessionId)); + if (LOG.isDebugEnabled()) { + LOG.debug("Got ping response for sessionid:0x" + + Long.toHexString(sessionId) + + " after " + + ((System.nanoTime() - lastPingSentNs) / 1000000) + + "ms"); + } {code} Secondly, its underlying cause is that: * *If a DEBUG log stmt is unguarded, the string operations (most likely concatenations) are actually conducted even though the log event doesn't happen b/c a level less verbose than DEBUG is configured.* * Adding the conditional check creates another basic block in Java bytecode. And instructions inside that basicblock is executed only when execution path goes into it. But this only happens when the path passes the test. Detailed explanations are in a StackOverflow thread: http://stackoverflow.com/questions/10428447/log-debug-enabled-check-in-java An alternative solution is to move from log4j to slf4j and use the "{}" format. A workaround now is to add all conditional checks. The additional overhead is marginal (possibly compare-and-jump instruction(s) in Java bytecode) compared to saved computation of expensive string creations and concatenations. Its counterpart in Hadoop has been accepted: HADOOP-6884. |
350688 | No Perforce job exists for this issue. | 2 | 350979 | 5 years, 4 weeks, 4 days ago | 0|i1ogrz: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
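ZOOKEEPER-1767 argues that an unguarded `LOG.debug("..." + expensive)` pays the string-building cost even when DEBUG is off, and that the SLF4J `{}` style sidesteps the concatenation. A self-contained sketch (a toy logger, not SLF4J itself) showing the mechanism; note the varargs arguments are still *evaluated* eagerly, so a guard remains useful when computing an argument is itself expensive:

```java
// Minimal sketch of why the SLF4J "{}" style avoids the concatenation cost
// described in ZOOKEEPER-1767: the message is only assembled if the level
// is enabled, so no guard is needed around simple parameterized calls.
public class LazyLogDemo {
    public static boolean debugEnabled = false;
    public static int formatCalls = 0;

    // Parameterized debug(): arguments arrive unformatted; the message is
    // assembled (and counted) only when DEBUG is actually enabled.
    public static void debug(String fmt, Object... args) {
        if (!debugEnabled) {
            return;                       // cheap: no string building at all
        }
        formatCalls++;
        StringBuilder sb = new StringBuilder();
        int argIdx = 0, from = 0, at;
        while ((at = fmt.indexOf("{}", from)) >= 0) {
            sb.append(fmt, from, at).append(args[argIdx++]);
            from = at + 2;
        }
        sb.append(fmt.substring(from));
        System.out.println(sb);
    }

    public static void main(String[] args) {
        long sessionId = 0xCAFEL;
        // DEBUG off: Long.toHexString still runs (argument evaluation),
        // but no formatting or concatenation happens inside debug().
        debug("Got notification sessionid:0x{}", Long.toHexString(sessionId));
        debugEnabled = true;
        debug("Got notification sessionid:0x{}", Long.toHexString(sessionId));
    }
}
```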
| ZooKeeper | ZOOKEEPER-1766 | Consistent log severity level guards and statements |
Improvement | Resolved | Minor | Fixed | Jackie Chang | Jackie Chang | Jackie Chang | 26/Sep/13 00:25 | 12/Feb/15 04:51 | 04/Oct/13 18:22 | 3.4.5 | 3.5.0 | 0 | 3 | A log statement should be guarded by its matching severity level. A log statement like if (LOG.isTraceEnabled()) { LOG.info("Session closing: 0x" + Long.toHexString(sessionId)); doesn't make much sense because the log message is only printed out when TRACE-level is enabled. This inconsistency was possibly introduced when developers demoted the original log statement from INFO but forgot to change its corresponding log severity level. |
350463 | No Perforce job exists for this issue. | 1 | 350756 | 6 years, 24 weeks, 5 days ago | 0|i1ofen: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1765 | Update code conventions link on "How to contribute" page |
Bug | Closed | Trivial | Fixed | Patrick D. Hunt | Flavio Paiva Junqueira | Flavio Paiva Junqueira | 25/Sep/13 18:21 | 13/Mar/14 14:17 | 01/Oct/13 20:03 | 3.4.6, 3.5.0 | documentation | 0 | 2 | 350420 | No Perforce job exists for this issue. | 0 | 350713 | 6 years, 2 weeks ago |
Reviewed
|
0|i1of53: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1764 | ZooKeeper attempts at SASL even though it shouldn't |
Bug | Closed | Major | Implemented | Unassigned | Kuba Skopal | Kuba Skopal | 25/Sep/13 05:51 | 13/Mar/14 14:17 | 25/Sep/13 13:05 | 3.4.4 | 3.4.6 | java client | 0 | 3 | ZOOKEEPER-1455, ZOOKEEPER-1437, ZOOKEEPER-1657 | We are using a proprietary SASL solution, but we don't want to use it with ZooKeeper. Unfortunately it seems, that there is no way to disable SASL for ZooKeeper as the code only checks for presence of "java.security.auth.login.config" system property to determine whether SASL should be used or not. For us it means, that ZooKeeper client just shuts down after SASL is initialized. What happens: 1) System.getProperty("java.security.auth.login.config") is initially null 2) ZooKeeper is initialized and used 3) Our JAAS/SASL component is initialized 4) System.getProperty("java.security.auth.login.config") is not null anymore 5) ZooKeeperSaslClient.clientTunneledAuthenticationInProgress() suddenly picks up the new property and starts returning true 6) ClientCnxnSocketNIO.findSendablePacket() suddenly stops returning any packets since clientTunneledAuthenticationInProgress is always true The communication is halted and eventually times out. |
350277 | No Perforce job exists for this issue. | 0 | 350570 | 6 years, 2 weeks ago | Solved by ZOOKEEPER-1657 | 0|i1oe9j: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
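The ZOOKEEPER-1764 report hinges on the client inferring "use SASL" from the mere presence of the `java.security.auth.login.config` system property. The resolution, ZOOKEEPER-1657, added an explicit switch (`zookeeper.sasl.client`). A hedged sketch of that decision logic, written with the property values passed as parameters for clarity (the real client reads system properties directly):

```java
// Sketch of the ZOOKEEPER-1657 direction that resolved ZOOKEEPER-1764: an
// explicit client flag overrides the old "JAAS config present" heuristic.
// The property name zookeeper.sasl.client is real; the method is illustrative.
public class SaslOptOutDemo {
    // saslClientProp: value of -Dzookeeper.sasl.client (null = unset)
    // jaasConfigProp: value of -Djava.security.auth.login.config (null = unset)
    public static boolean shouldUseSasl(String saslClientProp, String jaasConfigProp) {
        String enabled = (saslClientProp == null) ? "true" : saslClientProp;
        if (!Boolean.parseBoolean(enabled)) {
            return false;             // explicit opt-out wins
        }
        // Old heuristic: a JAAS config set by *any* component (e.g. a
        // proprietary SASL stack) used to flip this to true on its own.
        return jaasConfigProp != null;
    }

    public static void main(String[] args) {
        System.out.println(shouldUseSasl(null, "/tmp/jaas.conf"));    // heuristic fires
        System.out.println(shouldUseSasl("false", "/tmp/jaas.conf")); // opted out
    }
}
```

This matches the failure mode in the report: steps 3-4 set the JAAS property for an unrelated component, and without an opt-out the client's `clientTunneledAuthenticationInProgress()` starts returning true mid-session.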
| ZooKeeper | ZOOKEEPER-1763 | Upgrade the netty version |
Improvement | Resolved | Minor | Duplicate | Nicolas Liochon | Nicolas Liochon | Nicolas Liochon | 23/Sep/13 08:14 | 11/Oct/13 17:32 | 11/Oct/13 17:32 | 3.4.5 | 0 | 3 | ZOOKEEPER-1715, ZOOKEEPER-1681 | 2 years ago (in https://github.com/netty/netty/issues/103), Netty changed their group-id from org.jboss.netty to io.netty. ZooKeeper is still on the 3.2.5, so applications using 3.3 cannot use the maven "dependencyManagement" feature, as the group id differ. HBase & Hadoop 2 are on the branch 3.3+, with the new group id. Note that the netty 4 changes the package name as well. That's not the case for Netty 3.3+. |
349870 | No Perforce job exists for this issue. | 1 | 350168 | 6 years, 23 weeks, 6 days ago | 0|i1obs7: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1762 | ZOOKEEPER-1760 Implement 'check' version cli command |
Sub-task | Resolved | Major | Won't Fix | Unassigned | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 22/Sep/13 20:01 | 25/Sep/13 05:43 | 25/Sep/13 05:43 | 3.5.0 | java client | 0 | 1 | 349777 | No Perforce job exists for this issue. | 1 | 350075 | 6 years, 26 weeks, 1 day ago | 0|i1ob7j: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1761 | ZOOKEEPER-1760 Expose 'check' version api in ZooKeeper client |
Sub-task | Resolved | Major | Won't Fix | Rakesh Radhakrishnan | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 22/Sep/13 19:05 | 25/Sep/13 05:42 | 25/Sep/13 05:42 | 3.5.0 | java client | 0 | 3 | Implement ZooKeeper#check api | 349772 | No Perforce job exists for this issue. | 1 | 350070 | 6 years, 26 weeks, 1 day ago | 0|i1ob6f: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1760 | Provide an interface for check version of a node |
New Feature | Resolved | Major | Won't Fix | Rakesh Radhakrishnan | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 22/Sep/13 19:02 | 25/Sep/13 05:44 | 25/Sep/13 04:26 | 3.5.0 | java client | 0 | 6 | ZOOKEEPER-1761, ZOOKEEPER-1762 | The idea of this JIRA is to discuss the check version interface which is used to see the existence of a node for the specified version. Presently only multi transaction api has this interface, this umbrella JIRA is to make 'check version' api part of ZooKeeper# main apis and cli command. |
349771 | No Perforce job exists for this issue. | 0 | 350069 | 6 years, 26 weeks, 1 day ago | 0|i1ob67: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1759 | Adding ability to allow READ operations for authenticated users, versus keeping ACLs wide open for READ |
Improvement | Resolved | Major | Fixed | Yuliya Feldman | Yuliya Feldman | Yuliya Feldman | 18/Sep/13 02:28 | 12/Feb/15 08:44 | 29/Sep/13 19:47 | 3.4.5 | 3.5.0 | server | 0 | 6 | Java, SASL authentication, security | Today when using SASLAuthenticationProvider to authenticate Zookeeper Clients access to the data based on ACLS set on znodes there is no other choice but to set READ ACLs to be "world", "anyone" with the way how {code:java} public boolean matches(String id,String aclExpr) {code} is currently implemented. It means that any unauthenticated user can read the data when application needs to make sure that not only creator of a znode can read the content. Proposal is to introduce new property: "zookeeper.readUser" that if incoming id matches to the value of that property it will be allowed to proceed in "match" method. So creator of a znode instead of {code:java} ACL acl1 = new ACL(Perms.ADMIN | Perms.CREATE | Perms.WRITE | Perms.DELETE, Ids.AUTH_IDS); ACL acl2 = new ACL(Perms.READ, Ids.ANYONE_ID_UNSAFE); {code} will need to do {code:java} ACL acl1 = new ACL(Perms.ADMIN | Perms.CREATE | Perms.WRITE | Perms.DELETE, Ids.AUTH_IDS); ACL acl2 = new ACL(Perms.READ, new Id("sasl", "anyone")); {code} Assuming that value of "zookeeper.readUser" property was "anyone". This way at least READ access on corresponding znode has to be authenticated. |
349122 | No Perforce job exists for this issue. | 7 | 349420 | 6 years, 25 weeks, 4 days ago | 0|i1o75z: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
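The ZOOKEEPER-1759 proposal above changes the authentication provider's `matches()` so a configured "read user" id matches any authenticated caller. A toy sketch of that rule; the `zookeeper.readUser` name comes from the proposal, while the method body is a simplified stand-in for the real SASLAuthenticationProvider:

```java
// Illustrative-only sketch of the "zookeeper.readUser" idea in
// ZOOKEEPER-1759: matches() additionally accepts a configured read-only
// principal. This models the proposal, not shipped ZooKeeper code.
public class ReadUserMatchDemo {
    public static String readUser = "anyone"; // would come from -Dzookeeper.readUser

    // Simplified stand-in for SASLAuthenticationProvider.matches()
    public static boolean matches(String id, String aclExpr) {
        if (id != null && id.equals(aclExpr)) {
            return true;          // normal case: authenticated id matches ACL
        }
        // Proposed extension: any authenticated (non-empty) id may match
        // the ACL entry that names the configured read user.
        return aclExpr.equals(readUser) && id != null && !id.isEmpty();
    }

    public static void main(String[] args) {
        System.out.println(matches("alice@EXAMPLE.COM", "alice@EXAMPLE.COM")); // owner
        System.out.println(matches("bob@EXAMPLE.COM", "anyone"));              // read user
        System.out.println(matches("", "anyone"));  // unauthenticated: rejected
    }
}
```

The payoff described in the issue is that `new ACL(Perms.READ, new Id("sasl", "anyone"))` then grants READ only to authenticated clients, instead of `Ids.ANYONE_ID_UNSAFE` granting it to the world.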
| ZooKeeper | ZOOKEEPER-1758 | Add documentation for zookeeper.observer.syncEnabled flag |
Improvement | Closed | Minor | Fixed | Thawan Kooburat | Thawan Kooburat | Thawan Kooburat | 13/Sep/13 20:43 | 13/Mar/14 14:17 | 30/Sep/13 16:55 | 3.4.6, 3.5.0 | 0 | 3 | ZOOKEEPER-1552 | 348535 | No Perforce job exists for this issue. | 2 | 348832 | 6 years, 2 weeks ago | 0|i1o3jj: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1757 | Adler32 may not be sufficient to protect against data corruption |
Bug | Open | Minor | Unresolved | Unassigned | Thawan Kooburat | Thawan Kooburat | 12/Sep/13 19:57 | 13/Sep/13 22:04 | server | 0 | 5 | Linux, Oracle JDK6/7 | I was investigating a data inconsistency bug in our internal branch. One possible area is snapshot/txnlog corruption. So I wrote a more robust corruption test and found that it is easy to break our checksum algorithm, which is Adler32. When this happens, it is likely that the corrupted data will fail other sanity checks during the deserialization phase, but it is still scary that it can pass the checksum. |
348317 | No Perforce job exists for this issue. | 2 | 348613 | 6 years, 27 weeks, 5 days ago | 0|i1o273: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
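Both Adler-32 and CRC-32 ship in `java.util.zip` behind the same `Checksum` interface, so the algorithm swap contemplated in ZOOKEEPER-1757 is mostly a question of which implementation the serialization code constructs. A small sketch of that interface (it does not reproduce the issue's corruption test; Adler-32's weakness is its collision behavior on short/structured inputs, not trivial single-byte flips):

```java
import java.util.zip.Adler32;
import java.util.zip.CRC32;
import java.util.zip.Checksum;

// Both checksums implement java.util.zip.Checksum, so code written against
// the interface can switch algorithms without touching the read/write path.
public class ChecksumDemo {
    public static long sum(Checksum c, byte[] data) {
        c.reset();
        c.update(data, 0, data.length);
        return c.getValue();
    }

    public static void main(String[] args) {
        byte[] record = "zookeeper txn payload".getBytes();
        byte[] corrupted = record.clone();
        corrupted[0] ^= 0x01;   // single-byte flip: both algorithms catch this
        System.out.println("adler32: " + sum(new Adler32(), record)
                + " vs " + sum(new Adler32(), corrupted));
        System.out.println("crc32:   " + sum(new CRC32(), record)
                + " vs " + sum(new CRC32(), corrupted));
    }
}
```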
| ZooKeeper | ZOOKEEPER-1756 | zookeeper_interest() in C client can return a timeval of 0 |
Bug | Closed | Major | Fixed | Eric Lindvall | Eric Lindvall | Eric Lindvall | 08/Sep/13 00:45 | 13/Mar/14 14:17 | 16/Dec/13 16:50 | 3.3.4, 3.4.5 | 3.4.6, 3.5.0 | c client | 0 | 5 | If the client is connected to a zookeeper server that has hung while there is an outstanding request, zookeeper_interest() can return a timeval of 0 because send_to will be negative. | 347409 | No Perforce job exists for this issue. | 5 | 347708 | 6 years, 2 weeks ago | 0|i1nwmf: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
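ZOOKEEPER-1756 describes `zookeeper_interest()` handing the caller a zero timeval when a hung server makes the send deadline overdue. The affected code is the C client; purely for consistency with the other examples here, a Java sketch of the clamping idea (the real fix's handling of expiry may differ):

```java
// Illustrative clamp for the ZOOKEEPER-1756 symptom: when the deadline is
// already overdue, return a small positive wait instead of 0/negative, so
// the caller's select() still blocks rather than spinning.
public class InterestTimeoutDemo {
    public static int nextTimeoutMs(int recvTimeoutMs, int idleMs) {
        int toWait = recvTimeoutMs - idleMs;  // can go <= 0 on a hung server
        return Math.max(toWait, 1);           // clamp: never hand back zero
    }

    public static void main(String[] args) {
        System.out.println(nextTimeoutMs(10_000, 2_000));   // normal case
        System.out.println(nextTimeoutMs(10_000, 12_000));  // overdue: clamped
    }
}
```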
| ZooKeeper | ZOOKEEPER-1755 | Concurrent operations of four letter 'dump' ephemeral command and killSession causing NPE |
Bug | Closed | Major | Fixed | Rakesh Radhakrishnan | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 07/Sep/13 14:12 | 13/Mar/14 14:17 | 18/Feb/14 20:38 | 3.4.5 | 3.4.6, 3.5.0 | server | 0 | 9 | Potential problem occurs, when executing four letter 'dump' command and at the meantime zkserver has triggered session closure and removing the related information from the DataTree. Please see the exception: {code} java.lang.NullPointerException at org.apache.zookeeper.server.DataTree.dumpEphemerals(DataTree.java:1278) at org.apache.zookeeper.server.DataTreeTest$1.run(DataTreeTest.java:82) {code} |
347393 | No Perforce job exists for this issue. | 3 | 347692 | 6 years, 2 weeks ago | 0|i1nwiv: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1754 | Read-only server allows to create znode |
Bug | Closed | Critical | Fixed | Rakesh Radhakrishnan | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 06/Sep/13 11:42 | 13/Mar/14 14:17 | 17/Sep/13 19:14 | 3.5.0 | 3.4.6, 3.5.0 | server | 0 | 7 | 347281 | No Perforce job exists for this issue. | 4 | 347580 | 6 years, 2 weeks ago | 0|i1nvu7: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1753 | ClientCnxn is not properly releasing the resources, which are used to ping RwServer |
Bug | Closed | Major | Fixed | Rakesh Radhakrishnan | Rakesh Radhakrishnan | Rakesh Radhakrishnan | 05/Sep/13 10:37 | 13/Mar/14 14:17 | 18/Sep/13 09:15 | 3.4.6, 3.5.0 | java client | 0 | 7 | While pinging to the RwServer, ClientCnxn is opening a socket and using BufferedReader. These are not properly closed in finally block and could cause leaks on exceptional cases. ClientCnxn#pingRwServer() {code} try { Socket sock = new Socket(addr.getHostName(), addr.getPort()); BufferedReader br = new BufferedReader( new InputStreamReader(sock.getInputStream())); ...... sock.close(); br.close(); } catch (ConnectException e) { // ignore, this just means server is not up } catch (IOException e) { // some unexpected error, warn about it LOG.warn("Exception while seeking for r/w server " + e.getMessage(), e); } {code} |
347054 | No Perforce job exists for this issue. | 3 | 347353 | 6 years, 2 weeks ago | 0|i1nufr: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
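The quoted `pingRwServer()` snippet in ZOOKEEPER-1753 closes the socket and reader only on the happy path. The idiomatic fix direction is try-with-resources, which closes on both normal exit and exception. A runnable sketch, with a `ByteArrayInputStream` standing in for the RwServer socket stream (the hypothetical `readReply` helper is not the real method):

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStreamReader;

// Sketch of the fix direction for ZOOKEEPER-1753: let try-with-resources
// close the stream/reader even when reading throws.
public class PingRwServerSketch {
    public static String readReply(ByteArrayInputStream in) {
        // The reader declared here is closed automatically on both normal
        // exit and exception -- unlike the original code, which called
        // sock.close()/br.close() only when no exception was thrown.
        try (BufferedReader br = new BufferedReader(new InputStreamReader(in))) {
            return br.readLine();
        } catch (IOException e) {
            return null;  // real code logs "Exception while seeking for r/w server"
        }
    }

    public static void main(String[] args) {
        System.out.println(readReply(new ByteArrayInputStream("rw".getBytes())));
    }
}
```

In the real client the `Socket` itself would be a second resource in the same try-with-resources header, closed in reverse declaration order.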
| ZooKeeper | ZOOKEEPER-1752 | Download area contains obsolete releases |
Bug | Resolved | Major | Fixed | Patrick D. Hunt | Sebb | Sebb | 01/Sep/13 19:55 | 03/Mar/19 18:33 | 03/Mar/19 12:50 | 0 | 2 | The download area under http://www.apache.org/dist/zookeeper/ contains several superseded releases. It is important that only the latest release of each currently maintained product line is stored on the main ASF mirrors. Links to older releases can be provided on download pages, but the links should be to the ASF archive server http://archive.apache.org/dist/zookeeper/ See http://www.apache.org/dev/release.html#when-to-archive |
346456 | No Perforce job exists for this issue. | 0 | 346757 | 1 year, 2 weeks, 4 days ago | 0|i1nqrj: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1751 | ClientCnxn#run could miss the second ping or connection get dropped before a ping |
Bug | Closed | Major | Fixed | Jeffrey Zhong | Jeffrey Zhong | Jeffrey Zhong | 30/Aug/13 20:56 | 13/Mar/14 14:17 | 17/Sep/13 22:08 | 3.3.5, 3.4.5 | 3.4.6, 3.5.0 | 0 | 7 | We could throw a SessionTimeoutException even when timeToNextPing is also negative, depending on when the following line is executed by the thread, because we check for timeout before sending a ping. {code} to = readTimeout - clientCnxnSocket.getIdleRecv(); {code} In addition, we only ping twice no matter how long the session timeout is. For example, with a session timeout of 60 minutes we only try to ping twice in the 40-minute window, so the connection could be dropped by the OS after its idle timeout. The issue causes random "connection loss" or "session expired" errors on the client side, which is bad for applications like HBase. |
346375 | No Perforce job exists for this issue. | 1 | 346676 | 6 years, 2 weeks ago | 0|i1nq9j: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
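To make the ZOOKEEPER-1751 arithmetic concrete: the client derives `readTimeout` as two thirds of the session timeout and pings when roughly half of `readTimeout` has elapsed since the last send. A back-of-the-envelope sketch (the real ClientCnxn logic also accounts for the last-ping timestamp, and the fix caps the interval):

```java
// Back-of-the-envelope sketch of the timing problem in ZOOKEEPER-1751.
// Numbers are illustrative; the real ClientCnxn.run() loop is more involved.
public class PingScheduleDemo {
    // Ping when half the read timeout has elapsed since the last send.
    public static int timeToNextPing(int readTimeoutMs, int idleSendMs) {
        return readTimeoutMs / 2 - idleSendMs;
    }

    public static void main(String[] args) {
        int sessionTimeout = 60 * 60 * 1000;      // 60 min, as in the report
        int readTimeout = sessionTimeout * 2 / 3; // ZooKeeper's derivation
        // With a 60 min session timeout, the first ping is only due after
        // 20 min of idle time -- long enough for an OS/NAT idle drop.
        System.out.println(timeToNextPing(readTimeout, 0));              // 20 min in ms
        System.out.println(timeToNextPing(readTimeout, 25 * 60 * 1000)); // overdue
    }
}
```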
| ZooKeeper | ZOOKEEPER-1750 | Race condition producing NPE in NIOServerCnxn.toString |
Bug | Closed | Minor | Fixed | Rakesh Radhakrishnan | Helen Hastings | Helen Hastings | 30/Aug/13 18:57 | 13/Mar/14 14:16 | 04/Oct/13 19:16 | 3.5.0 | 3.4.6, 3.5.0 | server | 0 | 9 | The socket is closed and the variable "sock" is set to null for normal reasons, but the toString method is called before "sock" can be set again, producing a NullPointerException. Stack trace: 2013-08-29 01:49:19,991 6277 [CommitProcWorkThread-3] WARN org.apache.zookeeper.server.WorkerService - Unexpected exception java.lang.NullPointerException at org.apach.zookeeper.server.NIOServerCnxn.toString(NIOServerCnxn.java:961) at java.lang.String.valueOf(String.java:2854) at java.lang.StringBuilder.append(StringBuilder.java:128) at org.apache.zookeeper.server.NIOServerCnxn.process(NIOServerCnxn.java:1104) at org.apache.zookeeper.server.WatchManager.triggerWatch(WatchManager.java:120) at org.apache.zookeeper.server.WatchManager.triggerWatch(WatchManager.java:92) at org.apache.zookeeper.server.DataTree.createNode(DataTree.java:544) at org.apache.zookeeper.server.DataTree.processTxn(DataTree.java:805) at org.apache.zookeeper.server.ZKDatabase.processTxn(ZKDatabase.java:319) at org.apache.zookeeper.server.ZooKeeperServer.processTxn(ZooKeeperServer.java:967) at org.apache.zookeeper.server.FinalRequestProcessor.processRequest(FinalRequestProcessor.java:115) at org.apache.zookeeper.server.quorum.Leader$ToBeAppliedRequestProcessor.processRequest(Leader.java:859) at org.apache.zookeeper.server.quorum.CommitProcessor$CommitWorkRequest.doWork(CommitProcessor.java:271) at org.apache.zookeeper.server.WorkerService$ScheduledWorkRequest.run(WorkerService.java:152) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:722) |
346355 | No Perforce job exists for this issue. | 4 | 346656 | 6 years, 2 weeks ago | 0|i1nq53: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
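The ZOOKEEPER-1750 race is a classic check-then-use problem: `toString()` reads `sock` more than once while another thread closes the connection and nulls the field. The usual defensive pattern, sketched below with a toy class (not the real NIOServerCnxn), is to read the field once into a local and operate only on the local:

```java
import java.nio.channels.SocketChannel;

// Sketch of the usual fix for races like ZOOKEEPER-1750: snapshot the
// mutable field into a local, so a concurrent close() that nulls the
// field cannot NPE us mid-toString().
public class SnapshotFieldDemo {
    volatile SocketChannel sock;   // closed and nulled by another thread

    @Override
    public String toString() {
        SocketChannel s = sock;    // single read: immune to later nulling
        if (s == null) {
            return "NIOServerCnxn(closed)";
        }
        return "NIOServerCnxn(" + s + ")";
    }

    public static void main(String[] args) {
        SnapshotFieldDemo cnxn = new SnapshotFieldDemo();
        cnxn.sock = null;          // simulate the race having already won
        System.out.println(cnxn);  // no NPE, unlike the reported trace
    }
}
```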
| ZooKeeper | ZOOKEEPER-1749 | Login outside of Zookeeper client |
Improvement | Open | Major | Unresolved | Kai Zheng | Kai Zheng | Kai Zheng | 30/Aug/13 18:21 | 05/Feb/20 07:16 | 3.7.0, 3.5.8 | java client | 0 | 4 | HADOOP-9938 | This proposes to allow Zookeeper client to reuse login credentials and subject from outside, avoiding redundant logins and related configurations for services that utilizes Zookeeper. | 346350 | No Perforce job exists for this issue. | 2 | 346651 | 6 years, 4 days ago | 0|i1nq3z: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ZooKeeper | ZOOKEEPER-1748 | TCP keepalive for leader election connections |
Improvement | Resolved | Minor | Fixed | Ben Sherman | Antal Sasvári | Antal Sasvári | 29/Aug/13 10:57 | 22/Oct/19 18:00 | 08/Jun/17 11:42 | 3.4.5, 3.5.0 | 3.4.11, 3.5.4, 3.6.0 | leaderElection | 8 | 25 | Linux, Java 1.7 | In our system we encountered the following problem: If the system is stable, and there is no leader election, the leader election port connections are open for very long time without any packets being sent on them. Some network elements silently drop the established TCP connection after a timeout if there are no packets being sent on it. In this case the ZK servers will not notice the connection loss. This causes additional delay later when the next leader election is started, as the TCP connections are not alive any more. We would like to be able to enable TCP keepalive on the leader election sockets in order to prevent the connection timeout in some network elements due to connection inactivity. This could be controlled by adding a new config parameter called tcpKeepAlive in the ZooKeeper configuration file. It would be only applicable in case of algorithm 3 (TCP based fast leader election), having the default value false. If tcpKeepAlive is set to true, the TCP keepalive flag should be enabled for the leader election sockets in QuorumCnxManager.setSockOpts() by calling sock.setKeepAlive(true). We have tested this change successfully in our environment. Please comment whether you see any problem with this. If not, I am going to submit a patch. I've been told that e.g. Apache ActiveMQ also has a config option for similar purpose called transport.keepalive. |
346080 | No Perforce job exists for this issue. | 1 | 346381 | 21 weeks, 2 days ago | 0|i1nogn: | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
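The change ZOOKEEPER-1748 asks for is one guarded socket option in `QuorumCnxManager.setSockOpts()`, with `tcpKeepAlive` as the proposed config name. A runnable sketch of that shape (the surrounding method is simplified; `tcpNoDelay` stands in for the options the real method already sets):

```java
import java.net.Socket;
import java.net.SocketException;

// Sketch of the ZOOKEEPER-1748 proposal: optionally enable SO_KEEPALIVE on
// fast-leader-election sockets so idle connections survive middleboxes
// that silently drop inactive TCP sessions.
public class KeepAliveDemo {
    public static boolean tcpKeepAlive = true;  // would come from zoo.cfg

    public static void setSockOpts(Socket sock) throws SocketException {
        sock.setTcpNoDelay(true);               // existing kind of option
        if (tcpKeepAlive) {
            sock.setKeepAlive(true);            // the one-line addition
        }
    }

    public static boolean demo() {
        try (Socket sock = new Socket()) {      // unconnected is fine for options
            setSockOpts(sock);
            return sock.getKeepAlive();
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(demo() ? "keepalive set" : "failed");
    }
}
```

With `SO_KEEPALIVE` set, the kernel sends periodic probes on the idle connection (interval governed by OS settings such as Linux's `tcp_keepalive_time`), which both keeps state alive in middleboxes and lets peers detect dead connections sooner.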
| ZooKeeper | ZOOKEEPER-1747 | Zookeeper server fails to start if transaction log file is corrupted |
Bug | Resolved | Major | Duplicate | Unassigned | Sergey Maslyakov | Sergey Maslyakov | 29/Aug/13 10:04 | 12/Sep/13 11:06 | 12/Sep/13 11:05 | 3.4.5 | server | 1 | 2 | Solaris10/x86, Java 1.6 | On multiple occasions when ZK was not able to write out a transaction log or a snapshot file, the subsequent attempt to restart the server fails. Usually it happens when the underlying file system has filled up, preventing the ZK server from writing out a consistent data file. Upon start-up, the server reads in the snapshot and the transaction log. If the deserializer fails and throws an exception, the server terminates. Please see the stack trace below. The server not coming up, for whatever reason, is an undesirable condition. It would be nice to have an option to force-ignore parsing errors, especially in the transaction log. A checksum on the data could be a possible way to ensure integrity and "parsability". Another robustness enhancement would be proper handling of the condition when a snapshot or transaction log cannot be completely written to disk; basically, better handling of write errors. 
{noformat} 2013-08-28 12:05:30,732 ERROR [ZooKeeperServerMain] Unexpected exception, exiting abnormally java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:375) at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63) at org.apache.zookeeper.server.persistence.FileHeader.deserialize(FileHeader.java:64) at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.inStreamCreated(FileTxnLog.java:558) at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.createInputArchive(FileTxnLog.java:577) at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.goToNextLog(FileTxnLog.java:543) at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.next(FileTxnLog.java:625) at org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:160) at org.apache.zookeeper.server.ZKDatabase.loadDataBase(ZKDatabase.java:223) at org.apache.zookeeper.server.ZooKeeperServer.loadData(ZooKeeperServer.java:250) at org.apache.zookeeper.server.ZooKeeperServer.startdata(ZooKeeperServer.java:383) at org.apache.zookeeper.server.NIOServerCnxnFactory.startup(NIOServerCnxnFactory.java:122) at org.apache.zookeeper.server.ZooKeeperServerMain.runFromConfig(ZooKeeperServerMain.java:112) at org.apache.zookeeper.server.ZooKeeperServerMain.initializeAndRun(ZooKeeperServerMain.java:86) at org.apache.zookeeper.server.ZooKeeperServerMain.main(ZooKeeperServerMain.java:52) at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:129) at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:78) {noformat} |
346076 | No Perforce job exists for this issue. | 0 | 346377 | 6 years, 28 weeks ago | 0|i1nofr: | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
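The EOFException trace in ZOOKEEPER-1747 comes from a write truncated by a full disk. The "force-ignore" option the reporter asks for amounts to treating EOF in the middle of a record as end-of-log and keeping everything read so far. A toy sketch with an ints-only record format (not ZooKeeper's real txnlog format, which also carries checksums and headers):

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Sketch of the "tolerate a truncated tail" idea from ZOOKEEPER-1747:
// recover every complete record before the truncation instead of dying.
public class TolerantLogReadDemo {
    public static List<Integer> readAll(byte[] log) {
        List<Integer> txns = new ArrayList<>();
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(log))) {
            while (true) {
                txns.add(in.readInt());   // throws EOFException on a partial int
            }
        } catch (EOFException e) {
            // Truncated (or simply finished) log: keep what we have.
        } catch (IOException e) {
            throw new RuntimeException(e); // unreachable for an in-memory stream
        }
        return txns;
    }

    public static void main(String[] args) {
        // Two full 4-byte ints plus 2 stray bytes of a third (truncated write).
        byte[] log = {0, 0, 0, 1, 0, 0, 0, 2, 0, 0};
        System.out.println(readAll(log));
    }
}
```

The trade-off the issue hints at: silently accepting EOF cannot distinguish "disk filled mid-write" from "file damaged", which is why per-record checksums (as ZooKeeper's real txnlog has) are the companion safeguard.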
| ZooKeeper | ZOOKEEPER-1746 | AsyncCallback.*Callback don't have any Javadoc |
Improvement | Resolved | Major | Fixed | Hongchao Deng | Tsuyoshi Ozawa | Tsuyoshi Ozawa | 28/Aug/13 15:18 | 26/Jun/14 07:19 | 25/Jun/14 13:09 | 3.4.6 | 3.4.7, 3.5.0 | documentation | 0 | 4 | AsyncCallback.*Callback don't have any Javadoc. This forces users to read source code or sample code to understand what their arguments stand for or how one is difference from the others. | newbie | 345927 | No Perforce job exists for this issue. | 2 | 346228 | 5 years, 39 weeks ago |
Reviewed
|
| ZooKeeper | ZOOKEEPER-1745 | Wrong Import-Package in the META-INF/MANIFEST.MF of zookeeper 3.4.5 bundle |
Bug | Resolved | Major | Fixed | Jean-Baptiste Onofré | Xilai Dai | Xilai Dai | 26/Aug/13 05:25 | 24/Apr/14 21:58 | 24/Apr/14 21:58 | 3.4.5 | 3.4.6, 3.5.0 | server | 0 | 6 | Java 7 | Import-Package: javax.management,org.apache.log4j,org.osgi.framework;version="[1.4,2.0)",org.osgi.util.tracker;version="[1.1,2.0)" The "org.apache.log4j" entry should be replaced by "org.slf4j", because the ZooKeeper server classes import org.slf4j.* for logging. As shipped, trying to create an instance of some of its classes in an OSGi container (e.g. Apache Karaf) fails with: Caused by: java.lang.NoClassDefFoundError: org/slf4j/LoggerFactory at org.apache.zookeeper.server.quorum.QuorumPeerConfig.<clinit>(QuorumPeerConfig.java:46) |
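For ZOOKEEPER-1745 above, a corrected Import-Package header would look roughly like the following sketch (the version ranges shown for the OSGi packages are carried over from the reported manifest; the exact header in the released 3.4.6 bundle may differ):

{noformat}
Import-Package: javax.management,org.slf4j,org.osgi.framework;version=
 "[1.4,2.0)",org.osgi.util.tracker;version="[1.1,2.0)"
{noformat}

Note that MANIFEST.MF continuation lines begin with a single space, which is why the header wraps as shown.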
| ZooKeeper | ZOOKEEPER-1744 | clientPortAddress breaks "zkServer.sh status" |
Bug | Closed | Critical | Fixed | Nick Ohanian | Nick Ohanian | Nick Ohanian | 22/Aug/13 16:23 | 13/Mar/14 14:17 | 24/Oct/13 01:12 | 3.4.5 | 3.4.6, 3.5.0 | scripts | 0 | 6 | When "clientPortAddress" is used in the config file (zoo.cfg), zkServer.sh's status command runs a grep command that matches both "clientPort" and "clientPortAddress". This creates an extra argument for FourLetterWordMain, which fails, so the status command incorrectly indicates that it couldn't connect to the server. Also, "localhost" is hardcoded as the target host for FourLetterWordMain. The "clientPortAddress" should be used if it is provided in the config file. |
Reviewed
|
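The faulty match in ZOOKEEPER-1744 can be reproduced in isolation. The sketch below imitates the grep that zkServer.sh's status command performed (it does not run the real script, and the config values are made up): an unanchored grep for "clientPort" matches both keys, while anchoring the key name and excluding a following letter matches only the intended line.

```shell
# Hypothetical zoo.cfg containing both keys, as described in the report.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
clientPort=2181
clientPortAddress=10.0.0.5
EOF

# Naive grep: matches "clientPort" AND "clientPortAddress",
# producing two values where one was expected.
naive=$(grep "clientPort" "$cfg" | sed -e 's/.*=//')
echo "naive match:"
echo "$naive"

# Tighter pattern: the key must start the line and must not be
# followed by a letter, so "clientPortAddress" no longer matches.
fixed=$(grep "^clientPort[^[:alpha:]]" "$cfg" | sed -e 's/.*=//')
echo "fixed match: $fixed"

rm -f "$cfg"
```

With two values substituted into the FourLetterWordMain invocation, the extra argument made the status check fail even though the server was up, which is the behavior the report describes.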
| ZooKeeper | ZOOKEEPER-1743 | Chocolatey package for automating Zookeeper installation in Windows |
Improvement | Resolved | Minor | Won't Fix | Unassigned | Andrew Pennebaker | Andrew Pennebaker | 21/Aug/13 12:43 | 03/Mar/16 19:35 | 03/Mar/16 19:35 | build | 0 | 1 | ZOOKEEPER-1604 | Chocolatey (http://chocolatey.org/) Windows XP+ |
Ubuntu has "apt-get install zookeeper", simplifying installation on Linux. Similarly, Mac has "brew install zookeeper". Could we help out our Windows users by submitting a Chocolatey package? |
| Generated at Fri Mar 20 00:35:51 UTC 2020 by Song Xu using Jira 8.3.4#803005-sha1:1f96e09b3c60279a408a2ae47be3c745f571388b. |